Search Results

Search found 18976 results on 760 pages for 'ash machine'.

Page 132/760 | < Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800 MHz) on an Asus P5K-E motherboard which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066 MHz, the max the P5K-E allows) and to the Core 2 Quad Q9550 to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or using that money on an SSD or 10k rpm drive for the existing system's OS/Apps/Virtual Machine drive? Or should I just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database containing about 200MB of data. Currently I have a cron job set up on my home computer which ssh's into my server, runs a remote script that makes a backup of the database, and scp's that dump over to my local hard drive for storage. Each dump is 20MiB, and it does this every six hours (one month of backups is roughly 2GiB). The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't run the cron job from the server, because I can't have it scp the dump to my local machine from my server (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up in Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory, and have the backup service 'watch' that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software based.
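
    For reference, a minimal sketch of the pull-style cron job described above; the hostname, paths, and database name are assumptions for illustration, not the poster's actual setup:

        #!/bin/sh
        # Pull-style backup, run on the home machine from cron, e.g.
        #   0 */6 * * * /home/me/bin/pull-db-backup.sh
        set -e
        STAMP=$(date +%Y%m%d-%H%M)
        # make the dump on the server...
        ssh me@myserver.example.com "pg_dump -Fc mydb > /tmp/db-$STAMP.dump"
        # ...then copy it down to the local backup directory
        scp me@myserver.example.com:/tmp/db-$STAMP.dump /home/me/backups/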

    Read the article

  • VPN service into 192 network

    - by tophersmith116
    I'm thinking about setting up a security testing lab. I work on a switched network, and that just makes for unnecessary headaches when doing testing. I'd like to create a 192 network with a few machines inside for DBs, app servers, etc. I will need a pivot machine that connects to both the outer network and the 192 (for automation purposes). But I'd like to be able to connect into the 192 network with my own machine from the outer network as the "attacking" machine (rather than have dedicated attack machines inside the 192 network). Therefore, I'd like to have the pivot server be a VPN server as well, so that my machine can VPN into the 192 network from the outer network. First off, is this even possible? Can I have a single computer with two NICs where a VPN service allows remote connections into the 192? Secondly, I'd like to have multiple outer clients connect to the VPN. Does anyone have any suggestions? I've used Hamachi before with good results, but I've also seen some good stuff from OpenVPN.
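
    If OpenVPN ends up being the choice, a minimal sketch of the server config that would sit on the dual-homed pivot machine looks roughly like this; the subnets, certificate file names, and the 192.168.1.0/24 lab range are assumptions for illustration only:

        # /etc/openvpn/server.conf on the pivot (illustrative values)
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert pivot.crt
        key pivot.key
        dh dh2048.pem
        server 10.8.0.0 255.255.255.0            # address pool handed out to VPN clients
        push "route 192.168.1.0 255.255.255.0"   # push a route into the internal lab net
        keepalive 10 120
        # the pivot also needs net.ipv4.ip_forward=1, plus NAT or a return route on the lab side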

    Read the article

  • SSH Connection Error: No route to host

    - by dewbot
    There are three machines in this scenario: Desktop A : [email protected] Laptop A : [email protected] Machine B : [email protected] All the machines have Ubuntu 11.04 (Desktop A is a 64-bit one) and have both openssh-server and openssh-client. Now when I try to connect Desktop A to Laptop A, or vice versa, with ssh [email protected], I get the error port 22: No route to host in both cases. I own both machines; if I try the same commands from my friend's machine, i.e. Desktop B, I can access both my laptop and my desktop. But if I try to access Desktop B from my laptop or my desktop, I get port 22: Connection timed out. I even tried changing the SSH port number in the ssh_config file, but with no success. Note that 'Laptop A' uses a WiFi connection while 'Desktop A' uses an Ethernet connection, and 'Machine B' is on an entirely different network. Laptop A and Desktop A both use a router/nano receiver provided to me by my ISP, so two machines are connected to one router and can be online at the same time. Here is my ifconfig output for both machines: Laptop wlan0 Link encap:Ethernet HWaddr X:X:X:X:00:bc inet addr:1.23.73.111 Bcast:1.23.95.255 Mask:255.255.224.0 inet6 addr: fe80::219:e3ff:fe04:bc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:108409 errors:0 dropped:0 overruns:0 frame:0 TX packets:82523 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:44974080 (44.9 MB) TX bytes:22973031 (22.9 MB) Desktop eth0 Link encap:Ethernet HWaddr X:X:X:X:c5:78 inet addr:1.23.68.209 Bcast:1.23.95.255 Mask:255.255.224.0 inet6 addr: fe80::227:eff:fe04:c578/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10380 errors:0 dropped:0 overruns:0 frame:0 TX packets:4509 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1790366 (1.7 MB) TX bytes:852877 (852.8 KB) Interrupt:43 Base address:0x2000

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by Fred
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine, forgot to unplug the other, and the power supply and motherboard ended up dead. Luckily I back up to an external service religiously (rsync.net for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely, the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding that machine), but it's a quad-core Phenom II with a 1kW power supply. This is outside my experience so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

    Read the article

  • IPTABLES syntax help to forward Remote Desktop requests to a VM [CentOS host]

    - by NVRAM
    I have a VM running MS Windows XP hosted on my CentOS 5.4 machine. I can rdesktop into it from the hosting machine and work just fine using the private address (192.168.122.65), but I now need to allow Remote Desktop access from other computers (not just the machine hosting the VM). [Edit] I only need to allow access for a day or so, so I don't want to add a NIC (for XP activation reasons). Could someone help me with the iptables syntax? The VM is on a private/virtual network: 192.168.122.65, and my CentOS machine is on a physical network, at 10.1.3.38 (and 192.168.122.1 as the GW for the virtual net). I found this question, but none of the answers seemed to work, and I'm a bit timid about blindly trying variations. My FORWARD rules are as listed. Thanks in advance. # iptables -L FORWARD Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable RH-Firewall-1-INPUT all -- anywhere anywhere [Edit] If I do play "blindly", is there a simple way to reset the settings on CentOS (à la service network restart)?
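
    For context, the kind of rule being asked about is a DNAT in the nat table plus a matching ACCEPT in FORWARD; the following is only an illustrative sketch built from the addresses quoted above, not a tested answer:

        # send RDP (TCP 3389) arriving at the host's physical address on to the VM
        iptables -t nat -A PREROUTING -d 10.1.3.38 -p tcp --dport 3389 \
            -j DNAT --to-destination 192.168.122.65:3389
        # insert an ACCEPT ahead of the REJECT rules in the FORWARD chain
        iptables -I FORWARD -d 192.168.122.65 -p tcp --dport 3389 -j ACCEPT
        # to discard experiments and reload the saved rules on CentOS:
        service iptables restart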

    Read the article

  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi as the machine (pre-logon) and as the user (post-logon). We have everything working correctly except for 2 things: 1) sometimes the login scripts don't run; 2) the user VLAN is sometimes different from the machine VLAN, and no DHCP renew occurs after user logon. I am clear that both these problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also by enabling "This network uses different VLAN for authentication with machine and user credentials". If I enable these GPO settings in a lab, the computer does authenticate and gets WiFi before the user logs on, so when the login box is displayed, it says “Windows will try to connect to <SSID>”, even though it is already connected (which should be OK?). Enter the user credentials and it goes to a screen saying “Connecting to <SSID>”, hangs for 10 seconds, and fails with “Unable to connect to <SSID>, Logging on…”. The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we defeat the point of the WiFi SSO “before user logon”. Also by that point, no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN. When the “Connecting to <SSID>” screen comes up, there’s no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered until after the domain logon. Also, with this policy enabled, sometimes Windows hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P

    Read the article

  • One workstation gets slow access to the server, but others are fast

    - by Mike Hanson
    I've just set up a machine with Windows Server 2008. It hosts various services, like IIS, POP3, SMTP, music for Squeezeboxes, and VNC. All was working well for the first week or so. One day I needed to create a mapped drive on the server so it could access files on my workstation. Windows indicated that Network Discovery was needed, so I turned it on with the "Home / Office" option (rather than "Public"). This may be coincidence, but since that time I've been having trouble accessing various services from my main workstation (running Windows 7/64): POP3 continued working correctly, but SMTP was delayed or failed entirely (telnet took 20 seconds to connect, but Outlook would never send messages). VNC failed entirely; I reinstalled it on the server, and now it works but feels sluggish. The music web server was extremely delayed and usually failed; I tried reinstalling, and now it takes about 30 seconds to show the page name on the browser tab, and another 30 seconds to display any page contents. Other machines on the local network seem fine, as do machines connected via the Internet. I don't believe I changed anything on my own machine that would cause this. I considered the possibility that my anti-virus was involved, so I uninstalled AVG (commercial version), but that didn't help. I installed Norton 360 after that; it didn't complain of viruses on my machine, and the delays remained. Because only my machine is affected, I'm tempted to blame it, except that reinstalling software on the server improved the situation, so there is almost certainly something going on with the server too. The firewall has all the necessary ports open, and it works fine for the other workstations (including external machines connected via the Internet), which indicates that it should be OK. Any ideas?

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on KVM/QEMU. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability): /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \ -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \ -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \ -monitor chardev:monitor -localtime -boot c \ -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \ -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \ -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \ -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
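
    As an aside, the usual way to get host-boot startup while keeping virt-manager aware of the guest is to drive it through libvirt rather than a raw kvm command line; a rough sketch, assuming the domain name BORON from the command line above:

        # have libvirtd start the domain whenever it (and the host) comes up
        virsh autostart BORON
        # or drive it from an init script so virt-manager still sees the domain
        virsh start BORON
        virsh shutdown BORON     # sends an ACPI shutdown request to the guest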

    Read the article

  • PowerShell Get-Process cannot connect to remote computer

    - by amandion
    I've been struggling with this for a few hours and can't figure this out. I have two Windows 7 computers. One is my workstation, where I use PowerShell for administrative maintenance; the other is the machine I'd like to execute remote PowerShell cmdlets on via PowerShell remoting. On both computers, I've enabled PowerShell remoting and added all computers to TrustedHosts with the * value. On the remote computer, I've started the Remote Registry service and ensured that the DCOM, Winmgmt and WinRM services are running. The firewall is disabled on the remote machine too. The cmdlet I try to run is: Get-Process -ComputerName $name, where $name is the name of the remote machine. I keep getting an error saying that it could not connect to the remote PC. I've also tried using the IP and I get the same error. These PCs are not in a domain. I am able to do the following successfully: Invoke-Command {Get-Process} -ComputerName $name -Credential $creds, where $name is the machine name and $creds is the user name and password for the remote computer's local Admin account. This gives me the output I would expect. While this is an acceptable workaround, I am curious: why doesn't using Get-Process with remoting work on its own as it should? I've seen a few articles on the web suggesting people have had success with it. Each time I am using PowerShell on my workstation with elevated privileges. Any ideas?

    Read the article

  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a Dropbox folder to my C drive by just running a portable file? Extra background information because I know you guys hate it when you don't get the entire situation: I go back to university in the fall and I need a new storage solution. I decided to use Dropbox to sync my tiny university files (< 5 MB). I need to access these files from 4 machines: 1) a Windows 7 home machine, 2) a Windows 7 university A machine, 3) a Windows 7 university B machine, and 4) an Android tablet. 1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3, but those machines are not mine. There is an easy fix: run a portable version of the Dropbox syncer on a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable Dropbox syncer off the internet. But where will it store the files? A temporary directory on the C drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable Dropbox syncer will notice that the temporary directory is empty on each new PC I use, and instead of downloading my Dropbox folder to the machine, it syncs the other way around, i.e. it deletes my entire Dropbox. The solution is to mirror my Dropbox onto the temporary directory before running the Dropbox syncer.

    Read the article

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queuing installed. On another machine, running Windows 2003, is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines, etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (Wireshark) on the server reveals only a little. When trying to send a message from XP or 2003, I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper

    Read the article

  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc.). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same error: Connection failed: SQLState: '01000' SQL Server Error: 14 [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()). Connection failed: SQLState: '08001' SQL Server Error: 14 [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection. From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special -- just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is because I need to use a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a Postgres or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
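
    Purely as an illustration (the address, database name, and credentials below are made up), a DSN-less ODBC connection string for the legacy "SQL Server" driver that forces the TCP/IP network library looks roughly like this:

        Driver={SQL Server};Server=192.168.1.50;Network=DBMSSOCN;Database=LegacyDB;Uid=appuser;Pwd=secret;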

    Read the article

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online I could no longer access the container CTID=101. I could not destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does not display any containers at all (this was a fresh node and the first container was being created). I decided to create a new container at this point, assuming that the old container just was not saved for some reason. However, when I go to add the IP/host to the new container I get a warning that the IP is already in use. After doing a ping to the IP I realized there was a machine on that IP. I SSH into the machine and discover it is the OLD container that somehow is orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing the "shutdown now" command just reboots the container rather than shutting it down). Is this a flaw in OpenVZ or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
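
    For anyone poking at a similar situation, these are the usual places a container leaves traces on a stock OpenVZ install; the paths are defaults and may differ on a given node:

        vzlist -a                    # every container the vz tools know about
        vzctl status 101             # query the suspect CTID directly
        ls /vz/private/ /vz/root/    # default private areas and mount points
        ls /etc/vz/conf/             # per-container config files (101.conf, ...)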

    Read the article

  • Better URLs for this internal web server?

    - by sprugman
    I've got a server that I have admin access to, but don't fully manage. (I think it's a virtual machine, but I'm not 100% sure. It's running Apache on Windows Server 2003.) I share the IP with another user, so my sites all have to use port 8080. This is kind of ugly. Also, AFAIK, the only access I have is through an IP address. (I'm inside a corporate firewall and don't think I have access to a DNS server or anything.) I've adjusted my hosts file so I don't have to use the IP address on my local machine, but that's not a very generic solution. Are there any options to 1) get rid of the port requirement and 2) be able to use a name (maybe a machine name) instead of the IP address in a generic way? (I'm not really a network admin -- I'm a developer managing this machine. The IT folks who really manage it are a few people away from me and tough to get to do anything, so I'm looking for a lightweight solution if possible.)

    Read the article

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have got a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own generally dynamic IP. Until now, I have been doing this the way of the script kiddie: each host machine runs an (s)FTP server, or an HTTP(S) server, and correspondingly has a certain port opened up by its gateway. Each host machine runs a program that watches a certain folder and automatically opens or prints or exec()s when a new file of a given extension shows up. Dynamic IP addresses are accommodated using a dynamic DNS service. Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as needed. This approach has been surprisingly reliable; however, obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive and failures need to be detected within minutes of submission by end-users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine -- there is only one device sending a message to another device. So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays. So I'm looking for suggestions on how to model the protocol itself -- at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that -- please and thanks.

    Read the article

  • Corsair SSD appears completely blank and does not retain written data

    - by ebanders
    I have a 180GB Corsair SSD (model# CSSD-F180GB2-BRKT) as the primary drive in a Windows laptop. Recently the machine became unbootable after installing Windows updates. Windows installed updates before the machine shut down, and the next time the machine booted up it complained about not being able to find a bootable device. After finding fixmbr unsuccessful at making the machine bootable again, I investigated a little within Knoppix. fdisk revealed an empty partition table. A scan by TestDisk came up empty. And finally 'head -c 1024 | hd' revealed all zeros. Creating a primary partition spanning the whole disk completes successfully, but after a reboot the disk appears empty again. dmesg reveals no read or write errors. smartctl indicates that the drive is healthy, although the SMART attribute values do not appear to be read properly: "Data Page | WARNING: PREVIOUS ATTRIBUTE HAS TWO" and "Threshold Page | INCONSISTENT IDENTITIES IN THE DATA" messages appear within the table of values. I don't have much experience with SSDs. Is this drive dead or something? Can anyone recommend any diagnostic tools that may be suited for diagnosing SSDs?

    Read the article

  • Can only ssh when not using wifi

    - by AChrapko
    So I have 3 machines: a Windows 7 desktop that is always wired to my router, an OS X laptop, and a Raspberry Pi running Debian Linux. My router is a Linksys E1000 wireless-N. My goal is to be able to SSH into the Pi from any machine while it is connected via WiFi. My problem is that when trying to ssh from either the Win7 or OS X machine to the Pi, it either times out or gives an error: "ssh: connect to host 192.168.1.### port 22: No route to host". The only times that I have managed to connect to the Pi from any machine were when it was connected to the router via an Ethernet cable. Currently, with the Win7 desktop wired and the MacBook and Pi wireless, tests give the following: Win7 ping MacBook: Destination host unreachable. MacBook ping Win7: Request timeout. Win7 ping Pi: Destination host unreachable. MacBook ping Pi: Request timeout. And so on. Plugging the MacBook into the router with an Ethernet cable, all communication between Win7 and the MacBook works: pings, ssh, ftp, smb, etc. With no changes to the Pi, still no connections are possible to or from any of the other 2 machines. Note: all machines are able to connect to the internet and ssh to the same machine on a completely different network, wired or over WiFi. Plugging the Pi in with Ethernet (and the MacBook still wired), I can ssh to the Pi from both Win7 and the MacBook, and I can ssh from the Pi to the MacBook. All machines are still able to connect to the off-network machine. Also, another little side note: I was playing Warcraft 3 with my roommates the other day, and the only time they were able to see my LAN game was when they were plugged into the router with an Ethernet cable. Once or twice one of the laptops was able to connect over WiFi, but not without another computer connecting first via Ethernet. So basically, does anyone have any info as to why my router seems to completely ignore local wireless traffic?

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm using on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance (one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that may allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients that run on each machine to be synced, which would talk to the master server, which would then decide how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are dynamically created depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/ssh connections.
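
    For comparison, even the simplest master-driven alternative to per-machine cron jobs is a single push loop run from one coordinating node; the hostnames and paths below are placeholders, and it still wouldn't catch edits made directly on a non-master app server:

        #!/bin/sh
        # push the shared Magento tree from the coordinating node to each app server
        for host in app1.example.com app2.example.com app3.example.com; do
            rsync -az --delete /var/www/magento/ "$host:/var/www/magento/"
        done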

    Read the article

  • Install KVM based Windows 2008 remotely over SSH on a headless, no graphics Ubuntu 10.04 server?

    - by taazaa
    Hi, I have a Dell server at a remote data center with Ubuntu 10.04 as the host. It is a minimal install with the necessary virtualization packages. There is no X and the machine is headless. I have the Win2008 DVD in the machine and want to remotely install it. I tried: virt-install --connect qemu:///system -n vmwin2k8 -r 1024 --disk path=server2k8.qcow2,size=50 --cdrom /dev/sr0 --vnc --noautoconsole --os-type windows --os-variant win2k8 The qcow2 image gets created. However, I don't understand how to connect to see the install via VNC. This is my first time doing it, so it may be trivial or may not be possible. Remotely I have a Win 7 machine with PuTTY and RealVNC Viewer. Where is the graphical output of VNC going? Do I have to have a VNC server running on the host or some other machine and then connect to it from my VNC client? Please let me know or point me in the right direction. I have been searching the web for several days to figure out how this is supposed to work. Thanks!
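
    For what it's worth, a rough sketch of how a qemu-allocated VNC display is usually located and reached over SSH; the display number, port, and hostname below are illustrative, and the real values come from the running domain:

        # on the Ubuntu host: ask libvirt which VNC display the guest was given
        virsh vncdisplay vmwin2k8          # e.g. prints 127.0.0.1:0, i.e. TCP port 5900
        # from the Windows 7 machine: tunnel that port over SSH (PuTTY can do the same)
        ssh -L 5900:127.0.0.1:5900 admin@dell-server.example.com
        # then point the VNC viewer at localhost:5900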

    Read the article

  • Connect from Mac OS X to Windows 7 Desktop

    - by jrn
    I am trying to connect from my MacBook to my Windows 7 machine within my own network - if it works from outside my network that's a plus, but it's not a must-have. My Windows 7 machine is freshly installed with Windows 7 Home Premium. It runs the built-in firewall with no settings changed so far, as well as Microsoft Security Essentials. So far I have tried CoRD and Microsoft's Remote Desktop Connection to connect from my Mac to my Windows machine, without any success. I did try disabling the firewall on my Windows machine, but could not connect either; the reason I did this was to check whether there is a Windows firewall setting preventing me from connecting. On top of that, I manually started the Remote Desktop Services and Remote Desktop Configuration services within services.msc. Is there anything else I have to enable for a remote desktop connection? Could there be any router setting I have to tweak? Since I do not want to connect from outside my own network, I thought I wouldn't have to do any port forwarding. The error messages I get are all connection timeouts. I can, however, ping the hostname and/or IP address. Any help would be greatly appreciated. Thanks a lot, jrn

    Read the article

  • What is wrong with my home network? (Routing and connection issues)

    - by David
    I have a corporate laptop that was provided to me by a client, and I'm having some rather odd difficulties with it when I put the laptop on my home network. When I first brought the machine home it behaved like any other laptop. Once it was connected to the network it was assigned an IP address and I could remote into it just fine using the machine name. Lately, though, whenever I put this laptop on my network I am not able to ping or RDP into the machine, as the host name doesn't properly resolve. Additionally, I'm able to see the device and its assigned IP address clearly in my router firmware. This gets even stranger: now when I try to ping its IP address listed in my router, I see that it's actually trying to ping my own machine (screenshot of this very odd event below). This has driven me crazy to the point that I have actually replaced my router (it was behaving oddly in other ways), and I'm continuing to have these problems. The above ping capture is from the new router. As far as the network goes, I am now using a Netgear R7000 Nighthawk, and I haven't customized any of the networking settings in the router just yet (it was installed yesterday). I would appreciate any advice and would be happy to provide further diagnostic information. Networking isn't my strong suit, so I'm not even sure where to begin unraveling this thing.

    Read the article

  • Windows 8 doesn't start up after installation

    - by Raj BD
    I have a Dell Inspiron N5110 machine running Windows 7 Home Premium 64-bit. It has 6GB of RAM and a 500GB hard disk, capable of running any Windows or Linux system. I recently attempted to install Windows 8 on the machine. Installation goes smoothly until the final stage, when Windows is done installing and is getting devices ready. When "Getting devices ready" reaches about 60%, the screen goes blank with the machine still running. After a while it reboots. And after the new Windows logo with the dotted circle is done showing up, the screen goes blank again (when we'd expect the login screen to show up). There is not even a cursor. I can see the hard disk activity LED blinking, but nothing shows up on the screen. I've tried a clean install several times. The DVD is fine; it installed and worked well on my friends' laptop PCs. I tried installing both the Pro and Enterprise versions, and both 32-bit and 64-bit, but the problem was the same every time. Finally, I had to reinstall Windows 7, which installed and ran without any problem. The problem, we can guess, is perhaps graphics. I have the Intel Sandy Bridge mobile graphics chipset (Intel HD Graphics 3000). But if Windows 7 and Linux distributions like Mint and BackTrack can run on the machine, why on earth would Windows 8 not run?

    Read the article
