Search Results

Search found 15247 results on 610 pages for 'state machines'.

Page 113/610 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Problems setting up a VPN: can connect but can't ping anyone

    - by Fernando
    This is my first time setting up a VPN. Clients can connect but can't ping other machines. This is certainly a routing problem, but I can't find the right way to configure it. Here is the layout of the two LANs I want to connect: I want machines on 192.168.1.0/24 to be able to reach 192.168.0.0/24 as if they were on the same network. For the VPN network itself, I would like to use the 10.0.0.0/24 range. Here is my server.conf:

        proto udp
        port 1194
        dev tun
        server 10.0.0.0 255.255.255.0
        push "route 192.168.0.0 255.255.255.0 192.168.0.1"
        push "dhcp-option DNS 192.168.0.1"
        push "dhcp-option WINS 192.168.0.1"
        comp-lzo
        keepalive 10 120
        float
        max-clients 10
        persist-key
        persist-tun
        log-append /var/log/openvpn.log
        verb 6
        tls-server
        dh /etc/openvpn/keys/dh1024.pem
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        tls-auth /etc/openvpn/keys/mykey.key 0
        status /var/log/openvpn.stats

    And here is the config of one of my clients, 192.168.1.2:

        client
        dev tap
        proto udp
        remote my.no-ip.address 1194
        route 192.168.1.0 255.0.0.0 192.168.1.1 3
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt"
        cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.crt"
        key "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.key"
        tls-auth "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\mykey.key" 1
        ns-cert-type server
        cipher BF-CBC
        comp-lzo
        verb 1

    What exactly am I doing wrong? All machines can connect to OpenVPN, but ping doesn't work. In the client log I see the following error:

        Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
        Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: failed to parse/resolve route for host/network: 10.0.0.1

    Thanks!
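
    The client log points at three concrete problems, so here is a minimal sketch of a fix, assuming a routed setup is acceptable: the client uses dev tap while the server uses dev tun (the device types must match), the pushed route carries a LAN gateway (192.168.0.1) that the client cannot use across the tunnel, and the client's own route line applies a /8 netmask to a /24 network and is unnecessary anyway, since the client already sits on 192.168.1.0/24.

        # server.conf -- push the LAN route with no explicit gateway;
        # OpenVPN substitutes the VPN gateway on its own
        dev tun
        push "route 192.168.0.0 255.255.255.0"

        # client config -- match the device type and drop the route line entirely
        dev tun

    To make 192.168.1.0/24 reachable from the server side as well, the server additionally needs a client-config-dir entry containing "iroute 192.168.1.0 255.255.255.0" for that client, plus a matching "route 192.168.1.0 255.255.255.0" in server.conf.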

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, and what makes the much higher cost worth it when it comes to any reasonable performance (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier

    In AWS you get those super nasty public/private machine DNS names... there is no way to identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from the tag "ShortName", thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately, Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet - does anyone have a clue how to achieve this? Thanks
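
    Puppet selects node definitions by the agent's certname, which defaults to the machine's FQDN - so one hedged approach (a sketch, not the only option) is to set the certname from the ShortName tag at first boot, before the first agent run. The CLI call and paths below are assumptions based on the setup described above; the instance needs credentials or an instance role allowed to call DescribeTags.

        # first-boot sketch: derive the Puppet certname from the ShortName tag
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORTNAME=$(aws ec2 describe-tags \
            --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=ShortName" \
            --query 'Tags[0].Value' --output text)

        cat >> /etc/puppet/puppet.conf <<EOF
        [agent]
        certname = ${SHORTNAME}
        EOF

    With every member of the Perl cluster reporting a certname like perl-cluster-01, a regex node definition such as node /^perl-cluster/ {} then matches the whole cluster.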

    Read the article

  • Prevent Cisco VPN from interrupting home networking

    - by jkohlhepp
    I just got a new laptop, and for the most part have left its settings alone. Today I was trying to get some sharing going between my desktop and the laptop. Both machines are connected to the same wireless network and both consider that network to be a Home network. Both are running Win7 Home Premium. It seems my laptop is aware of my desktop on the network: it can ping it by IP or by computer name, and when I go to Network from the laptop, I can see the desktop in the list of computers. However, my desktop cannot ping the laptop, nor can it see it within Network. My desktop has a Homegroup set up, but my laptop says "There is currently no homegroup on the network". I do have network discovery turned on for both machines. Why can my desktop not "talk" to my laptop when it works the other way around?

    Update: Disabling the Windows Firewall on the laptop somewhat fixes the problem. With it disabled, my desktop can ping my laptop, but the laptop still can't see the homegroup. Also, the desktop can ping via hostname, which resolves to IPv6, but can't ping via the IPv4 address. Obviously I'd rather not leave my firewall disabled, so I need a more specific fix.

    Update 2: Aha! It is the Cisco VPN software I was running to connect to work computers. Once I disconnected and exited from that, the two PCs seemed to be talking normally and the homegroup was visible to the laptop. So now my question has morphed: how can I prevent Cisco VPN from interrupting my home networking?
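
    The classic Cisco IPsec VPN Client blocks all non-tunnel traffic while connected unless local LAN access is allowed on both ends. A hedged sketch of the client-side half: the "Allow Local LAN Access" checkbox on the connection entry's Transport tab, which (to my understanding) corresponds to the EnableLocalLAN keyword in the connection's .pcf profile, e.g. under C:\Program Files\Cisco Systems\VPN Client\Profiles\:

        [main]
        EnableLocalLAN=1

    It only takes effect if the VPN administrator also permits split tunneling / local LAN access in the group policy on the concentrator, so this may need a request to the work IT team.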

    Read the article

  • Enabling WinRM by Group Policy

    - by SaintNick
    I'm having partial success enabling WinRM through Active Directory GPOs in our Server 2008 R2 environment. I've created a GPO that enables "Allow automatic configuration of listeners" and also enables all the necessary predefined WinRM firewall rules. This GPO works fine for our webservers; indeed, this is reflected by "Server Manager Remote Management" nicely flipping to "enabled" in the Server Manager server summary. However, the same GPO applied to both of our management servers, which are Domain Controllers, does not give the same result. I see the GPO settings being applied, including the listener, as confirmed by:

        C:\Windows\system32>winrm e winrm/config/listener
        Listener [Source="GPO"]
            Address = *
            Transport = HTTP
            Port = 5985
            Hostname
            Enabled = true
            URLPrefix = wsman
            CertificateThumbprint
            ListeningOn = 10.32.40.210, 10.32.40.211, 10.32.40.212

    But in Server Manager, under Server Summary, Remote Management remains "disabled", and indeed when trying to connect to one of these machines Server Manager gives an "Access Denied". Manually enabling WinRM locally via Server Manager's "Configure Server Manager Remote Management" on either of these machines works fine. What can be the cause? Can it have something to do with these machines being DCs and needing extra settings in the GPO?

    Nick Reid
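
    One thing worth checking - a hedged suggestion, since the GPO described above covers only the listener and firewall rules: on 2008 R2, Server Manager remote management involves more than the WinRM listener; it also registers the Server Manager WinRM plug-in and additional firewall rule groups. The built-in script below configures all of that in one go, so running it on one DC and comparing the resulting configuration with a GPO-managed DC may reveal what the GPO is missing:

        # on a DC, from an elevated PowerShell prompt
        & "$env:windir\System32\Configure-SMRemoting.ps1" -Force -Enable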

    Read the article

  • Client flips between internal and external IP addresses?

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works.

    Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem in terms of how the client interacts with the way I have some of the other clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong.

    I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and it gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
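
    A plausible explanation - a hypothesis, not a confirmed diagnosis: whenever the client's resolver falls through to the ISP's DNS servers, it receives the site's public address, the connection goes out through NAT and hairpins back through the router, and the web server logs the external source address. Two hedged fixes: stop listing the ISP's servers directly on the client (let the internal named forward to them instead), and/or explicitly serve internal clients internal addresses. A BIND 9 view sketch, with example.com standing in for the real domain:

        // named.conf sketch: internal clients always get RFC 1918 addresses,
        // so their connections never hairpin through the router's public side
        view "internal" {
            match-clients { 192.168.0.0/24; localhost; };
            zone "example.com" {
                type master;
                file "internal/db.example.com";  // A records point at 192.168.0.x
            };
        };

    Removing the ISP's servers from the client's DNS list closes the fallback path that hands out the public addresses in the first place.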

    Read the article

  • After update, suddenly lost ability to access Windows Server 2008 R2 shares from Windows XP clients

    - by Knute Knudsen
    Today I lost the ability to see my Windows Server 2008 R2 shares from any of my 3 Windows XP machines in my small office. The 5 Win7 machines haven't been affected (they are still able to browse/access the 2008 server), but none of my WinXP machines can access the 2008 R2 server anymore. Yesterday (and for the previous year) everything was working fine. I do not have a domain set up. I can still access Win7 shares from WinXP clients. Browsing the server logs, I see that the following update was installed last night:

        Installation Ready: The following updates are downloaded and ready for
        installation. This computer is currently scheduled to install these
        updates on Thursday, November 15, 2012 at 3:00 AM:
        - Security Update for Windows Server 2008 R2 x64 Edition (KB2761226)
        - Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2729452)
        - Windows Malicious Software Removal Tool x64 - November 2012 (KB890830)
        - Cumulative Security Update for Internet Explorer 9 for Windows Server 2008 R2 x64 Edition (KB2761451)

    It seems likely that something was changed in last night's update, but so far I haven't seen anything on microsoft.com to prove it. I did hear that XP is reaching the end of the road soon. Any ideas?
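
    A quick way to confirm or rule out the update theory - a hedged suggestion, with the KB numbers simply taken from the log above: remove the suspect security updates one at a time on the server, rebooting and re-testing from an XP client between attempts. If the shares come back, you have the culprit and can research its changed defaults (XP clients are most often tripped up by tightened SMB signing or NTLM requirements).

        rem on the server, from an elevated command prompt
        wusa /uninstall /kb:2761226 /quiet /norestart
        rem reboot, test from an XP client, then repeat with the next KB if needed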

    Read the article

  • Windows Server 2008 R2 Virtual Lab Activation strategies?

    - by William Hilsum
    I have an ESXi server that I use for testing; however, I often need to create additional Windows Server virtual machines. Typically, if I do not need a VM for more than 30 days, I simply do not activate it. However, I have been doing a lot of HA/DRS testing recently and I have had a few servers up for more than this time. I have an MSDN account with Microsoft and have already received extra keys for Windows Server 2008 R2. I am doing nothing illegal, and I am sure that if I asked they would issue more - but I do not want to tempt fate!

    I have got 3 different "activated" Windows snapshots I can get to at any time. If I try to clone these machines, I get the usual "did you copy or move the VM" message. If I choose copy, as far as I can see, it changes the BIOS ID and NIC MACs, which is enough to disable activation. If I choose move, it keeps the activation fine (obviously, I know to change the NIC MAC - I believe I can leave the BIOS ID without problems). However, either of these options keeps the same SID for the computer and user accounts.

    After the activation period has expired, as far as I can see, all that happens is that optional updates do not work - the normal updates seem to work fine. Based on this, as you can easily get into Windows when not activated without any sort of workaround, I was wondering if it is OK just to leave a machine unactivated? (However, I would obviously prefer it to be activated!) Alternatively, how dangerous is it to run multiple machines in a non-domain environment with the same SID?

    I am just interested to know if anyone can recommend a strategy for me. I have only found one solution that deals with bypassing activation - I am not interested in doing anything remotely dodgy... At a stretch, I am happy to rearm (I have never needed to keep a server past 100 days), but I would rather have a proper strategy in place.
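
    A sketch of the usual clean strategy, assuming rearming fits the sub-100-day use described above: generalize the template once with sysprep, which resets both the SID and the activation grace period, and rearm clones as needed (2008 R2 allows a limited number of rearms per install).

        rem run once in the template VM before snapshotting/cloning
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

        rem in a clone: extend the grace period, then check the license state
        slmgr /rearm
        slmgr /dlv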

    Read the article

  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do:

        $ ssh me@remote_host "sudo touch test_sudo"   # just a simple example
        sudo: no tty present and no askpass program specified

    The remote machines are configured in such a way that the password for sudo is never asked for. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh:

        $ ssh -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Let's try to force this allocation with -t -t:

        $ ssh -t -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Nope, it doesn't work. In /etc/sudoers, of course, I have this line:

        #Defaults requiretty

    ...but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix?

    EDIT: Here is the sudoers file of a host where ssh me@host "sudo stat ." works. Here is the sudoers file of a host where it doesn't work.

    EDIT 2: Running tty on a host where it works:

        $ ssh me@host_ok tty
        not a tty
        $ ssh -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.
        $ ssh -t -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.

    Now on a host where it doesn't work:

        $ ssh me@host_ko tty
        not a tty
        $ ssh -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
        $ ssh -t -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
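
    The tty transcripts suggest that sshd on the failing hosts never allocates a pty even with -t -t (so their sshd_config is worth inspecting), which leaves sudo's requiretty default biting. A hedged one-time fix, pushed through whatever access channel still works (console, kickstart, config management), assuming /etc/sudoers.d is included as on stock Fedora: drop a per-user override there rather than editing the main sudoers file on every box.

        # sketch: exempt one account from requiretty; validate before installing
        echo 'Defaults:me !requiretty' > /tmp/10-me-nott
        visudo -c -f /tmp/10-me-nott                 # syntax check first
        install -m 440 /tmp/10-me-nott /etc/sudoers.d/10-me-nott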

    Read the article

  • Error 53 - The network path was not found.

    - by Jack
    I have a machine in my Active Directory domain that I can no longer "net view" from other machines in the domain. It is a Windows XP Pro machine, and it is hosting a VMware virtual machine running my Domain Controller. If I attempt net view [machine name] I get system error 53, "The network path was not found". This is not a DNS issue; the same thing happens with the machine's IP. I don't think it's a firewall issue - I turned the firewall off on this machine. As I mentioned, it has worked in the past, and then stopped for no reason that I can see. I (intentionally) didn't change the software. I CAN get to the VMs hosted on this machine, can connect to their shares, net view them, etc. All other machines can see each other. In fact, the problem machine can see other machines and access their shares just fine. I tried removing the machine from the domain and re-adding it. I tried deleting the shares and recreating them. Not sure how to troubleshoot this any further. Any ideas?
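
    Error 53 with working ping usually means the machine is reachable but nothing is answering on the SMB/NetBIOS ports. A hedged first round of checks on the problem machine:

        rem is the Server service actually running?
        sc query lanmanserver

        rem is anything listening on the file-sharing ports?
        netstat -an | findstr ":445 :139"

        rem are NetBIOS names registered on the expected adapter?
        nbtstat -n

    If the 445/139 listeners are missing, the usual suspects on an XP VM host are File and Printer Sharing having been unbound from the physical adapter (the VMware bridged/host-only adapters can shuffle bindings) or the Server service having stopped.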

    Read the article

  • How to configure TFTPD32 to ignore non PXE DHCP requests?

    - by Ingmar Hupp
    I want to give our Windows guy a way of easily PXE-booting machines for deployment by plugging his laptop into one of our site networks. I've set up a TFTPD32 configuration which does just that, and our normal DHCP server ignores the PXE DHCP requests because they carry some magic flag, so this part works as desired. However, I'm not sure how to configure TFTPD32 to only respond to PXE DHCP requests (the ones with the magic flag) and ignore all normal DHCP requests (so that production machines don't get a non-routed address from the PXE server). How do I configure TFTPD32 to ignore these non-PXE DHCP requests? Or if it can't, is there another equally easy-to-use piece of software that he can run on his Windows laptop? Since the TFTP part is working fine, a DHCP server with the ability to serve PXE only would do. Worst case, I'll have to set up a virtual machine with all this, but I'd much prefer a small, simple solution. I'm not interested in solutions that involve using the existing DHCP servers or separating machines onto their own network for deployment; the whole point is to be simple and stand-alone.
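
    If the virtual-machine fallback does become necessary, one sketch worth knowing: dnsmasq's proxyDHCP mode answers only PXE clients and never hands out IP addresses, so it cannot disturb production machines (dnsmasq is Linux software, hence the VM; the TFTP path and menu text below are placeholders).

        # proxyDHCP: supplement the existing DHCP server with boot info only
        dnsmasq --port=0 \
                --dhcp-range=192.168.1.0,proxy \
                --pxe-service=x86PC,"Network boot",pxelinux \
                --enable-tftp --tftp-root=/srv/tftp

    --port=0 disables the DNS half of dnsmasq entirely; clients still get their addresses from the production DHCP server and take only the boot filename from this one.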

    Read the article

  • How small (spec-wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around: give the virtual machine a very modest amount of RAM and disk and then choose and configure the OS appropriately. The question is, how small a virtual machine have people managed to set up (and get to both boot and run)? Virtual machines doing something useful are preferable, even though I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run.

    Quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Referencing SQL Server 2008 R2 SMO from Visual Studio 2010

    - by user69508
    Hello. We have read a number of things about referencing SQL Server SMO from Visual Studio but still don't have the definite answers we need. So, here it goes... A number of years ago we created a C# application using Visual Studio 2005 and SQL Server 2005. In that application, we added .NET references to a number of SQL Server SMO assemblies, and everything worked fine. Those references were:

        Microsoft.SqlServer.ConnectionInfo  GAC  9.0.242.0
        Microsoft.SqlServer.Smo             GAC  9.0.242.0
        Microsoft.SqlServer.SqlEnum         GAC  9.0.242.0

    We have now migrated to Visual Studio 2010 and SQL Server 2008 R2. However, when we try to reference those same SMO assemblies for SQL Server 2008 R2, they don't appear in the .NET references tab. We want to reference the SQL Server 2008 R2 version of those same SMO assemblies for our upgraded C# application. On our development machines, we have SQL Server 2008 R2 Developer installed with all options, including the SDK, such that the assemblies are found in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, my first questions are: Are we supposed to use file references to the SMO assemblies instead of .NET references in Visual Studio 2010 with SQL Server 2008 R2? Or is there some problem with our development machines such that the SMO assemblies are not appearing in the .NET references tab?

    Next, our production machines will have SQL Server 2008 R2 Workgroup installed with the client tools option selected, thus providing those same SMO assemblies in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, the next questions are: When we release to production, are we supposed to redistribute the SMO assemblies with our application? Or will our application work on the production servers without redistributing the SMO assemblies (since the client tools/SMO assemblies have been installed)? What else?

    Thanks for the help!
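
    Two hedged things to check before anything else: VS2010 creates new projects targeting the ".NET Framework 4 Client Profile" by default, which hides many assemblies from the Add Reference dialog (switching the project to the full ".NET Framework 4" or 3.5 in project properties may make them reappear), and the Browse tab can always pull the DLLs straight from the SDK folder. A sketch of the resulting csproj file references, with paths assuming the default SDK location quoted above:

        <!-- explicit file references to the version 10.0 (2008 R2) SMO assemblies -->
        <Reference Include="Microsoft.SqlServer.Smo">
          <HintPath>C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SqlServer.Smo.dll</HintPath>
        </Reference>
        <Reference Include="Microsoft.SqlServer.ConnectionInfo">
          <HintPath>C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SqlServer.ConnectionInfo.dll</HintPath>
        </Reference>

    For production, machines that already have the client tools (and therefore SMO) installed should not need anything redistributed; for machines without them, Microsoft ships SMO as a separate installer in the SQL Server Feature Pack.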

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant test machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up Vagrant versions of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted - say, unimportant.domain.com - what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it, and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the SQL machine?!

    One solution I can think of is to avoid autosigning and put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant ssh job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine - which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant and Puppet SSL certificates for development or testing clones of production machines?
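
    The threat is real: hostname-based autosigning grants a cert to anyone who can name themselves appropriately. A hedged sketch of a middle ground that keeps production certs out of the repo - disable autosigning for *.vagrant.domain.com, mint a dedicated cert per clone on the master, distribute it to developers out of band, and install it at provision time from a synced folder that lives outside the repository (all names below follow the post; the /secrets path is an assumption):

        # on the puppet master: pre-generate a cert for the clone
        puppet cert generate sql.vagrant.domain.com   # puppet cert --generate on 2.x

        # Vagrantfile sketch (provisioner syntax varies by Vagrant version);
        # /secrets is an extra synced folder mapped from outside the repo,
        # so the key never enters git
        config.vm.provision :shell, :inline =>
          "install -D -m 600 /secrets/sql.vagrant.domain.com.pem " +
          "/var/lib/puppet/ssl/private_keys/sql.vagrant.domain.com.pem"

    That way a rooted box gains nothing by renaming itself, because no name is auto-signed anymore.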

    Read the article

  • OS X won't see Windows 7 in network (and vice versa)

    - by meds
    I've enabled SMB sharing in OS X Lion and have added folders to share. It says "Windows Sharing: On" with a green circle next to it (in the Sharing window), and that to access the volume I will need to go to \\192.168.0.17. It also says that the OS X machine should be visible as "macbook" on the network. Both my Windows 7 and OS X machines are connected to the same network, yet when I try to go to \\192.168.0.17, or from the Mac try to go to my Windows system (smb://192.168.0.6), the two OSs don't see each other. Any ideas why? Attempting to ping the Mac from Windows results in this output in the command prompt:

        Pinging 192.168.0.17 with 32 bytes of data:
        Reply from 192.168.0.6: Destination host unreachable.
        Request timed out.
        Request timed out.
        Request timed out.

        Ping statistics for 192.168.0.17:
            Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),

    ipconfig in Windows shows:

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix  . :
           Link-local IPv6 Address . . . . . : fe80::8918:efd1:b05c:890f%21
           IPv4 Address. . . . . . . . . . . : 192.168.0.6
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.0.1

        Ethernet adapter Bluetooth Network Connection:
           Media State . . . . . . . . . . . : Media disconnected

        Ethernet adapter VMware Network Adapter VMnet1:
           Link-local IPv6 Address . . . . . : fe80::98ab:63fc:3c07:d837%13
           IPv4 Address. . . . . . . . . . . : 192.168.74.1
           Subnet Mask . . . . . . . . . . . : 255.255.255.0

        Ethernet adapter VMware Network Adapter VMnet8:
           Link-local IPv6 Address . . . . . : fe80::80ff:c575:7b50:3a10%14
           IPv4 Address. . . . . . . . . . . : 192.168.21.1
           Subnet Mask . . . . . . . . . . . : 255.255.255.0

        (plus several isatap/Teredo tunnel adapters, all reporting "Media disconnected")

    and ifconfig in OS X shows:

        lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
            options=3<RXCSUM,TXCSUM>
            inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
            inet 127.0.0.1 netmask 0xff000000
            inet6 ::1 prefixlen 128
        gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
        stf0: flags=0<> mtu 1280
        en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>
            ether c8:2a:14:01:24:c1
            media: autoselect (none)
            status: inactive
        en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether e0:f8:47:0c:fe:04
            inet6 fe80::e2f8:47ff:fe0c:fe04%en1 prefixlen 64 scopeid 0x5
            inet 192.168.0.17 netmask 0xffffff00 broadcast 192.168.0.255
            media: autoselect
            status: active
        p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
            ether 02:f8:47:0c:fe:04
            media: autoselect
            status: inactive
        fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
            lladdr 70:cd:60:ff:fe:d8:f1:32
            media: autoselect <full-duplex>
            status: inactive
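
    "Destination host unreachable" reported from the sender's own address means ARP for 192.168.0.17 is failing on the Windows side, which points below SMB entirely - often a firewall or the VMware virtual adapters confusing Windows' network profile. Two hedged first checks on the Windows machine:

        rem make sure File and Printer Sharing (including its ICMP echo rules)
        rem is enabled for the active profile
        netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes

        rem inspect the ARP cache right after a failed ping; a missing entry
        rem for 192.168.0.17 confirms a layer-2 problem
        arp -a 192.168.0.17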

    Read the article

  • Improving IO with FlashCache

    - by Devator
    I have a server with 2 HDDs (2x 1 TB) running in RAID 1 (software RAID). I want to improve IO performance by using flashcache. There are KVM virtual machines running on it, using LVM. Regarding this, I have the following questions:

    - Will this even work? flashcache works for block devices, but these are all virtual machines with their own setup.
    - How much of a performance increase should I expect? Most virtual machines run websites and some host games.
    - How big does the SSD need to be? Would a bigger SSD increase performance, since it would be able to cache more files?
    - What happens if the SSD dies? Would flashcache retrieve files from the traditional HDD, and could I simply replace the SSD?
    - How much faster would writeback be in comparison with writethrough and writearound?

    I have no access to a test system unfortunately, so could I install flashcache on a live server without unmounting the disks? I found a great tutorial here which I would be using.
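
    On the "will this even work" question: flashcache sits below LVM as a device-mapper target, so the guests never see it - you cache the backing block device and point LVM at the cached device. A minimal sketch, with device names assumed for illustration (/dev/ssd is the cache disk, /dev/md0 the existing mirror):

        # create a writeback cache device in front of the md mirror
        flashcache_create -p back cachedev /dev/ssd /dev/md0

        # LVM must then discover the PV via /dev/mapper/cachedev instead of
        # /dev/md0 (adjust the filter in /etc/lvm/lvm.conf accordingly)

    One caveat on writeback: with -p back, dirty blocks live only on the SSD until flushed, so an SSD death can lose data; writethrough and writearound are slower for writes but survive SSD failure cleanly, which also answers the "what happens if the SSD dies" question.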

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third - the only current VM (already VirtualBox) - which is the thin-client host.

    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantages/disadvantages that would have compared to using shared folders from the VM host - basically having each whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (a disk image).

    To add context, but not to solicit additional advice: I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).
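
    Raw disk access in VirtualBox works through a special vmdk descriptor that points at the physical device, so the data drives never need to become disk-image files; for a pure file-serving workload this is typically faster than vboxsf shared folders, which go through a paravirtual filesystem and are a common bottleneck. A sketch, with device and path names assumed:

        # create a raw-disk descriptor for one of the data drives (the host user
        # needs read/write access to the device node, e.g. via the disk group)
        VBoxManage internalcommands createrawvmdk \
            -filename /srv/vbox/data1.vmdk -rawdisk /dev/sdb

        # then attach data1.vmdk to the file-server VM like any other disk

    One caveat worth noting: snapshots don't apply to raw disks, which actually fits the plan above of snapshotting only the virtual system disks.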

    Read the article

  • Sync clock on Windows XP machine to external (non-domain, non-workgroup) Windows Server 2008 R2 machine

    - by Eric
    I have two machines, and I'd like their clocks to be in sync for various reasons. Machine 1 is an XP machine located in the office. Machine 2 is a VPS hosted by a third party running Windows Server 2008 R2. These machines are not in any kind of workgroup or domain together; they are completely separate machines. Machine 2 currently syncs once a week to time.windows.com, and its clock does seem to wander a bit within that week. What I would like to do is have Machine 1 set its clock based on the clock of Machine 2. I have tried configuring w32tm on the XP machine. This is what I used for configuration:

        w32tm /config /syncfromflags:manual /manualpeerlist:"<ip address of machine 2>"

    However, whenever I issue the /resync command I get "The computer did not resync because no time data was available". I have made sure to start the Windows Time service on Machine 2, and I have added firewall exceptions for UDP port 123. Is there something I need to configure on Machine 2 (other than just starting the time service) in order to get it to respond?

    Edit: I have also run w32tm /config /reliable:YES /update on Machine 2. I am still getting "The computer did not resync because no time data was available". Is there something else I'm missing?
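
    A hedged suggestion for the Machine 2 side: by default the Windows Time service consumes NTP but does not serve it, and /reliable:YES only advertises reliability within a domain. Explicitly enabling the NtpServer provider is the usual missing step:

        rem on Machine 2, from an elevated prompt: turn on the NTP server provider
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f
        net stop w32time && net start w32time

        rem then, from Machine 1, retry:
        w32tm /resync

    If it still fails, running w32tm /stripchart /computer:<ip address of machine 2> from Machine 1 shows whether NTP packets are coming back at all, which separates a service problem from a firewall problem on the VPS side.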

    Read the article

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my Gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds below 10 MB/s, sometimes even below 1 MB/s. The machine configurations are:

    Desktop: Intel Core 2 Quad Q6600, Windows 7 Ultimate x64, 2x WD Green 1 TB drives in striped RAID, 4 GB RAM, AB9 QuadGT motherboard, Realtek RTL8810SC network adapter.

    Windows Home Server: AMD Athlon 64 X2, 4 GB RAM, 6x WD Green 1.5 TB drives in the storage pool, Gigabyte GA-MA78GM-S2H motherboard, Realtek 8111C network adapter.

    Switch: D-Link Green DGS-1008D 8-port.

    Both machines report being connected at 1 Gbps, and the switch shows green lights for those two ports, also indicating 1 Gbps. When connecting the machines through the switch, I see insanely low speeds from the WHS to the desktop measured with iperf: 10 Kbits/sec (WHS running iperf -c, desktop running iperf -s). Using iperf the other way (WHS running iperf -s, desktop iperf -c), speeds are also bad (~20 Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds from desktop to WHS (~300 Mbits/sec), but still around 10 Kbits/sec from WHS to the desktop. File transfer speeds are also much quicker (in both directions).

    Log from the desktop for an iperf connection from the WHS (through the switch):

        C:\temp>iperf -s
        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227
        [ ID] Interval       Transfer     Bandwidth
        [248]  0.0-18.5 sec  24.0 KBytes  10.6 Kbits/sec

    Log from the desktop for an iperf connection to the WHS (through the switch):

        C:\temp>iperf -c 192.168.1.20
        ------------------------------------------------------------
        Client connecting to 192.168.1.20, TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [148]  0.0-10.3 sec  28.5 MBytes  23.3 Mbits/sec

    What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.
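
    Two hedged checks before blaming the hardware: the 8 KB default TCP window caps iperf well below gigabit even on a healthy link, and strongly asymmetric results (10 Kbit/s one way, tens of Mbit/s the other) are a classic signature of a duplex/flow-control mismatch or a bad cable pair. Worth re-running with a sane window and duration:

        rem on the receiving side
        iperf -s -w 256k

        rem on the sending side: bigger window, longer run
        iperf -c 192.168.1.20 -w 256k -t 30

    If the asymmetry survives that, forcing both Realtek adapters to 1 Gbps full duplex (and disabling their flow control and "green ethernet" power-saving options) in the adapter's Advanced driver properties is the next experiment; Realtek drivers of that era are frequent offenders.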

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by dotnetdev
    I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (as does the DC). If I try to resolve any of the hostnames remotely, it fails. For example, when I am in SQL Server Reporting Services and need to connect to a remote server, providing the hostname of the desired target server fails, but providing the IP works. How can I pass the hostname and have it resolve to an IP? Is there anything I need to look for on the DNS server? It has records for the hostnames (in the forward lookup zone, I think), but the reverse zone is empty. Isn't it the case that forward lookup resolves hostname to IP and reverse lookup resolves IP to hostname? Also, I don't know the subnet mask, because that is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
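
    A hedged pointer: since the forward zone has the records, the usual culprit is that the clients aren't appending the AD DNS suffix when given a bare hostname, or aren't using the DC as their resolver at all. Quick checks from one of the failing machines (targethost and yourdomain.local are placeholders):

        rem is the DC listed as the DNS server, and is a DNS suffix appended?
        ipconfig /all | findstr /i "DNS suffix server"

        rem does the fully qualified name resolve even when the short one fails?
        nslookup targethost
        nslookup targethost.yourdomain.local

        rem re-register this machine's own records with the AD DNS
        ipconfig /registerdns

    Connecting by IP bypasses DNS entirely, so that working only proves the network path is fine; and an empty reverse zone breaks only IP-to-name lookups, which is not the symptom here, so it can be ignored for now.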

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as a Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines on to the local domain (example.local); the login is a little slow, but works. The client computers run an executable which opens, reads and writes the database files from the server share.

    The problem: when running Samba with the LDAP password backend, the client application runs VERY slowly, with a maximum transfer rate of about 2.5 Mbit per second. If I disable LDAP, the client app speeds up 20x, with a transfer rate of 50 Mbit/sec, and runs smoothly. I'm testing with just two users and two machines, so concurrency and LDAP size shouldn't be the problem here.

    The suspect: LDAP and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    The slow smb.conf, with LDAP:

        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
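
    A hedged place to look first: with ldapsam, every session setup and access check can trigger LDAP lookups, and an unindexed OpenLDAP tree turns each one into a full scan. Adding equality indexes for the attributes Samba queries constantly is a common fix (slapd.conf sketch; stop slapd, add the indexes, run slapindex, start slapd again):

        # /etc/openldap/slapd.conf
        index objectClass                         eq
        index uid,uidNumber,gidNumber,memberUid   eq
        index sambaSID,sambaPrimaryGroupSID       eq
        index sambaDomainName                     eq

    Watching the slapd log with stats logging (loglevel 256) for searches returning huge candidate counts confirms whether unindexed lookups are the bottleneck; running nscd on the Samba host can also take pressure off repeated name-service queries.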

    Read the article

  • How can I track down the cause of ext3 filesystem corruption?

    - by Jon Buys
    We have a VMware vSphere 5 environment running CentOS 5.8 virtual machines. In the past two weeks we have had five incidents of virtual machines having a filesystem become corrupt, requiring an fsck to repair. Here is what we see in the logs:

        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392098: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:39:28 hostname kernel: Aborting journal on device dm-2.
        Nov 14 14:39:28 hostname kernel: __journal_remove_journal_head: freeing b_committed_data
        Nov 14 14:39:28 hostname last message repeated 4 times
        Nov 14 14:39:28 hostname kernel: ext3_abort called.
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
        Nov 14 14:39:28 hostname kernel: Remounting filesystem read-only
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392099: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:31:17 hostname ntpd[3041]: synchronized to 194.238.48.2, stratum 2
        Nov 14 15:00:40 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2162743: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 15:13:17 hostname kernel: __journal_remove_journal_head: freeing b_committed_data

    The problem seems to happen while we are rsync'ing application data from another server. So far we have been unable to reproduce the problem or identify a root cause. After a few servers had this problem, we assumed there was an issue with the template, so we scrapped all VMs cloned from the template, destroyed the template, and built a new one from scratch, installed from a newly downloaded CentOS ISO. We use HP EVA SANs for datastores, and moved from a 4400 to a 6300 after the first problem. Since the move, and since rebuilding new virtual machines, we have seen the issue twice. On one VM we shut down the server, removed two virtual CPUs, and booted it back up again; the problem presented itself almost immediately. On the other VM, we rebooted it, and the problem happened half an hour later. Any tips or pointers in the right direction would be appreciated.
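
    One hedged avenue worth ruling out: ext3 on CentOS 5 defaults to barrier=0, so if anything in the stack (the VM layer, a SAN controller failover, a volatile write cache) reorders or drops cached writes, the journal can land on disk ahead of the data it describes, producing exactly this kind of directory corruption. Enabling explicit write barriers on the affected filesystems is cheap to try (device and mountpoint below are placeholders):

        # remount with barriers now...
        mount -o remount,barrier=1 /data

        # ...and persist it in /etc/fstab:
        /dev/mapper/vg-data  /data  ext3  defaults,barrier=1  1 2

    Correlating the corruption timestamps against vCenter events (storage path failovers, snapshot consolidations) on the EVA is the other half of the investigation.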

    Read the article

  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I'm still completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost.

    Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice, as I can easily run it over SSH on the local network I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2 TB external hard drive (computer 1 unisons to the HD, computer 2 unisons to the HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive runs ext4 (in TrueCrypt), so it maintains all the Unix filesystem info like owners and groups.

    I just set up a new machine and Unison'd it to get the music onto it, and I realized that now all of my permissions are fubar. I had to run Unison as root, since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" across all machines, this essentially throws a huge wrench in the gears.

    Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
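
    A hedged sketch of the standard pattern for (a): put the shared tree in a common group and mark the directories setgid, so new files inherit the group regardless of which user writes them. Names follow the post; "media" is an assumed group name.

        # one-time setup (as root)
        groupadd media
        usermod -aG media leandra
        usermod -aG media rfkrocktk
        chgrp -R media /home/leandra/Music
        find /home/leandra/Music -type d -exec chmod 2775 {} +   # setgid dirs
        find /home/leandra/Music -type f -exec chmod 664 {} +

    Both users also need a umask of 002 (or POSIX ACLs) so the files they create stay group-writable, and for (b), running Unison with -perms 0 stops it from propagating permissions at all, so it won't fight the scheme during sync.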

    Read the article

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS machines (uname -a: SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running Munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision from the service provider. But adding them to Munin was fairly easy, by writing a small socket service (if anyone is interested, I put it up on GitHub: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn).

    Yesterday, I implemented/adapted the required plugins for our machines. And here the questions start. First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a Munin plugin from those gives me a graph that is pretty much uninformative, compared with the default plugin for Linux nodes, which has a lot more detail - most importantly, how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    On to the next puzzle: looking at the graphs, I noticed activity in the "paging in/out" graphs, even though the memory graph still shows unused memory. Upon further investigation, I found that df reports /tmp as mounted on swap. Digging around on the web, I understood that df will display it as swap, but it is in fact mounted as tmpfs. Now I don't know if this explains the paging activity. The default Munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values, and I already find it strange that it uses the cpu_stat module. So maybe I'm simply misinterpreting the "paging" graphs? Second question: do the paging graphs indicate that parts of memory are paged to disk? Or is the activity caused by file operations in /tmp?
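
    On the first question, a hedged pointer: Solaris 10 can produce a kernel-level page breakdown without installing anything, which maps reasonably well onto what the Linux memory plugin shows.

        # kernel / anon (process) / exec+libs / page cache / free breakdown
        echo ::memstat | mdb -k

        # raw page counters, scriptable from a plugin
        kstat -n system_pages

        # and for the tmpfs question: swap reserved vs. available
        swap -s

    Since /tmp is tmpfs backed by swap, heavy file churn there consumes the same resource the pager uses, so some "swap" movement during file operations in /tmp is expected and does not necessarily mean application memory is being paged out.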

    Read the article

  • Painfully slow login to AD bound Mac OS X Leopard machine when off home network

    - by GeeBee
    Dear all, just looking for a little help with this problem that seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). When on the home network, logging in is swift and works as expected. When users take the machines off site, login can take 5 minutes or more: the user enters correct credentials, but the desktop does not appear for a very long time. Outside the office, I have tried logging in using a local admin account, switching off AirPort, and then logging in with an AD account; in this situation login is immediate again. It all seems as if Leopard is finding a suitable wireless network, spending far too long looking for the domain, and eventually giving up and using the cached credentials instead. I have read that disabling Bonjour on the machine will stop this problem (I have not yet tested it): http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "solution", as I would like to be able to use Bonjour on the local network as well as having AD-bound machines. However, is disabling Bonjour really the only answer? Is there not some timeout setting somewhere that could be amended to stop Leopard spending forever looking for home? Any help would be very gratefully received. Thanks, Gordon

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >