Search Results

Search found 7845 results on 314 pages for 'connected'.

  • How to connect to a SAN from CentOS through two iSCSI targets

    - by garconcn
    I asked a similar question before; this time I want to use a separate subnet for each iSCSI target, hence the new question. I have an old Promise VTrak M500i SAN server with two iSCSI ports, and I want to connect to two LUNs on it through two separate targets from a CentOS 5.7 64-bit server. My network setup is as follows:

        CentOS server:
          Management network  - 192.168.1.1
          Storage network 1   - 192.168.5.2
          Storage network 2   - 192.168.6.2

        Promise SAN server:
          Management network  - 192.168.1.2
          iSCSI Port 1        - 192.168.5.1
          iSCSI Port 2        - 192.168.6.1

    I have two logical drives on this SAN, mapped as follows:

        Index  Initiator Name         LUN Mapping
        0      iqn.2011-11:backup     (LD0,0)
        1      iqn.2011-11:template   (LD1,1)

    Basically, I want the traffic to iqn.2011-11:backup (LUN 0) to go through the 192.168.5.1 port, and the traffic to iqn.2011-11:template (LUN 1) to go through the 192.168.6.1 port. I don't use MPIO; I just want to separate the traffic to avoid congestion. How do I achieve this? I am new to SAN stuff, so please explain in as much detail as you can. Thank you.

    The following is what I am doing now. After mapping the LUNs to my pre-defined initiators, the CentOS server can discover both targets:

        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.5.1
        192.168.5.1:3260,1 iscsi-1
        192.168.6.1:3260,2 iscsi-1
        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.6.1
        192.168.6.1:3260,2 iscsi-1
        192.168.5.1:3260,1 iscsi-1
        [root@centos ~]# /etc/init.d/iscsi start
        iscsid is stopped
        Starting iSCSI daemon:                                     [  OK  ]
                                                                   [  OK  ]
        Setting up iSCSI targets:
        Logging in to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260]
        Logging in to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260]
        Login to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] successful.
        Login to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] successful.
                                                                   [  OK  ]
        [root@centos ~]# iscsiadm -m session
        tcp: [1] 192.168.6.1:3260,2 iscsi-1
        tcp: [2] 192.168.5.1:3260,1 iscsi-1

    When I check the LUN mapping on the SAN server for the two logical drives, both LUNs are connected through Port0-192.168.5.2 with the initiator defined in CentOS:

        Assigned Initiator List:
        Initiator Name       Alias                 IP Address          LUN
        iqn.2011-11.centos   centos.mydomain.com   Port0-192.168.5.2   0
        iqn.2011-11.centos   centos.mydomain.com   Port1-192.168.5.2   1

    I assume the following is what I want:

        Initiator Name         Alias                 IP Address          LUN
        iqn.2011-11.backup     centos.mydomain.com   Port0-192.168.5.2   0
        iqn.2011-11.template   centos.mydomain.com   Port0-192.168.6.2   1
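
    In case it helps others with the same setup: on reasonably recent open-iscsi versions, traffic can be pinned to a NIC with iface binding, so each target is only discovered and logged in through one port. A minimal sketch (the interface names eth1/eth2 are assumptions):

        # Create one iSCSI iface per storage NIC (the iface names are arbitrary)
        iscsiadm -m iface -I iface-backup --op=new
        iscsiadm -m iface -I iface-backup --op=update -n iface.net_ifacename -v eth1
        iscsiadm -m iface -I iface-template --op=new
        iscsiadm -m iface -I iface-template --op=update -n iface.net_ifacename -v eth2

        # Discover and log in to each portal through its own iface only
        iscsiadm -m discovery -t sendtargets -p 192.168.5.1 -I iface-backup
        iscsiadm -m discovery -t sendtargets -p 192.168.6.1 -I iface-template
        iscsiadm -m node -p 192.168.5.1:3260 -I iface-backup --login
        iscsiadm -m node -p 192.168.6.1:3260 -I iface-template --login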

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most appropriate place for my question, but it seems to me that people sharing this kind of concern would converge either here or on a more Unix-specific sub-site. Either way, here goes.

    Background (feel free to skip to The Question, below; this should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise).

    My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and the clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work well on intermittent connections, such as on a mobile device, where you often walk around places with low signal.

    I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told the folks who missed me to just email or call me. Much whining happened, and I got called many weird things for not using MSN, but folks eventually got over it.

    Next, Google Talk came along, and it seemed a lot better than MSN ever was. The protocol was open, so I could use whatever client took my fancy. With the advent of smartphones, I got myself a Gtalk client on the phone and have had a really decent, integrated, mostly universal IM solution. Over the last few months, though, all Google services have been feeling flaky: IMs often arrive anywhere between twenty minutes and one hour after being sent, clients randomly disconnect, client priorities seem to work only sometimes, and sometimes just a random device among those connected gets the IM. I think the time has come to look for greener grass.

    The Question

    It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging and SMS/email on the phone, but I hope that in this day and age there is something better out there.

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love, but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan, I just added these things one after another and connected them up in ad hoc ways.

    Since I bought my MacBook I've found I spend much less time on the Mac Pro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place, and it would be good to know they were all safely in one location (and fully backed up).

    I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive that was visible to all machines, so all my current files were synced up wherever I was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.

    The kit list thus far:

        - Apple Mac Pro, 8 GB / 3 x 250 GB RAID 0 + 1 TB
        - Apple MacBook Pro 13", 8 GB / 250 GB - I spend a lot of my work time in a Windows 7 VM on this
        - Crappy Acer laptop (for the children's use - iPlayer, watching movie/TV files)
        - HP ProLiant server, 4 GB / 80 GB + 160 GB + 300 GB
        - Sun Ultra 10, 2 x 80 GB (old, but in top-notch condition)
        - PS3, 160 GB
        - iPod Classic
        - 2 x 8 GB iPod touch

    Observations:

        - Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT-style roaming profile.
        - Because the server is also used for hosting test/beta applications and a SQL Server database, it can't be dedicated to file serving.
        - The two Macs really could do with sharing a roaming profile or similar.
        - I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
        - I've got no shortage of 500 GB external USB hard drives.
        - iMovie files are very large and ideally would be processed on a RAID system.
        - Apple's Time Machine isn't so great.

    If anyone could suggest all or part of a setup that would fulfil some of my requirements, I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am looking for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point in terms of equipment, and I am willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd with a moderate tolerance for setting the solution up. I would prefer the system to be fairly low-maintenance once it is set up, but I am willing to accept some trade-offs.

    Existing equipment:

        - Router - Netgear WNDR3700 (gigabit)
        - Router - D-Link GamerLounge DGL-4300 (gigabit)
        - Switch - 16-port Trendnet green switch (gigabit)
        - Switch - 5-port Trendnet green (gigabit)
        - Computer - i7-950 office computer (gigabit Ethernet)
        - Computer - Q6600 quad-core media center, hooked up to the TV, records shows (gigabit Ethernet)
        - Computer - Acer 1810T ultraportable laptop (gigabit and wireless-N)
        - NAS - Intel SS4200-E (gigabit)
        - External hard drive - 2 TB WD Green drive (eSATA)
        - All kinds of miscellaneous network-connected kit: TV, Blu-ray player, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

        - Robust backup for a growing collection of huge family picture files and personal files, around 1.5 TB (including offsite backup)
        - A central location for all users' files, while keeping them secure from each other
        - Storage for terabytes of movie backups and recorded TV, with access from all computers (maybe around 4 TB eventually)
        - The possibility to host files for friends and family easily

    Nice to have:

        - Backup of the terabytes of movie backups

    Intriguing possibility:

        - The capability to have users' Windows desktops and files look the same from all network computers

    I am not sure whether the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to use RAID most effectively. Currently I simply back up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find the disk is corrupt. One possibility I am considering now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. A pseudo-backup of all files would exist because of the RAID, and important files would also be backed up offsite by carrying hard drives to work.

    But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risk with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of the irreplaceable files.
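
    One note on the corruption question above: RAID mirrors every write, good or bad, so it does not protect against file corruption; versioned snapshots do. A minimal sketch using rsync hard-link snapshots (all paths and the schedule are assumptions, not a recommendation for this exact setup):

        #!/bin/sh
        # Each run creates a new dated snapshot; unchanged files are
        # hard-linked to the previous snapshot, so a corrupted copy today
        # never overwrites yesterday's good version.
        SRC="/srv/photos/"
        DEST="/backups/photos"
        TODAY=$(date +%Y-%m-%d)

        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
        ln -sfn "$DEST/$TODAY" "$DEST/latest"

    On the first run there is no "latest" snapshot yet; rsync warns about the missing --link-dest directory and simply makes a full copy.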

  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear Superuser community. I had no idea where to post this, because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this.

    I have a Fritz!Box branded router from my ISP, 1&1 in Germany. My computer is connected to it with a normal Ethernet cable. I always set the IP manually on the computer and use the Google DNS servers for name resolution. I also tried OpenDNS, and the result is the same. With that configuration, the following happens:

        - Google search responds with a big delay
        - Gmail, Google Calendar & Google Drive requests time out the majority of the time

    To troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, the following happens:

        - Google search times out most of the time
        - Gmail, Google Calendar & Google Drive work most of the time

    Sometimes the sites that time out will come up but, weirdly enough, the pictures on the buttons will be missing. For instance, the magnifying glass on Google will be gone, or the circle arrow on Gmail (all such buttons, in fact). All other websites load just fine - and very quickly - and all other network functionality is completely unaffected. The behaviour of fixed IP & Google DNS vs. automatic IP & DNS is easily reproducible.

    I am going crazy trying to fix this, as I have no idea what the hell is going on at this point. Here is a list of the things I have tried so far:

        - Flushed the DNS cache
        - Tried different browsers (it works fine on my laptop, by the way)
        - Tried disabling Teredo & the IPv6 stack
        - Emptied all caches
        - Checked the HOSTS file
        - Rebooted the router
        - Reset the router
        - Reinstalled the network adapter
        - Tracert displays a normal route until timing out at one point
        - Ping usually doesn't work for the unreachable sites either
        - Ran both a complete Norton 360 scan & a Kaspersky 2012 scan
        - Ran the Kaspersky Virus Removal Tool in safe mode
        - Tried the connection in safe mode with networking enabled

    If you have any ideas, please let me know. I'm getting desperate...
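
    When symptoms differ between fixed DNS and DHCP-assigned DNS like this, it helps to query each resolver directly and bypass the browser entirely; a quick diagnostic sketch from a command prompt (the router address 192.168.178.1 is the typical Fritz!Box default, an assumption here):

        :: Ask each resolver directly, bypassing the browser and the cache
        nslookup www.google.com 8.8.8.8
        nslookup www.google.com 192.168.178.1

        :: Flush and then inspect the local resolver cache
        ipconfig /flushdns
        ipconfig /displaydns | more

        :: Is the path to the public resolver itself clean?
        tracert -d 8.8.8.8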

  • Best Practice: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution to my DNS problem. We are running several services in our company that you can reach only over VPN. Other services, which are reachable through the internet, are under the domain ... At the moment all services inside the VPN network go by .local... and have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux through OS X to Windows.

    I can think of four possibilities for implementing a DNS infrastructure:

        1. Most common: an internal DNS server whose address is pushed by the VPN. But this has several drawbacks: your DNS responses are limited by the speed of the VPN connection and of your own DNS server. With very complex websites, this can increase page load times quite a lot. Also: we have several VPNs that are not connected to each other, and each has its own DNS server.

        2. Several DNS servers running locally. These have to be configured by hand, and you have to use some third-party tool like dnsmasq. When you make a DNS request, you ask your locally running DNS server, which decides which upstream server to ask for which domain name. A colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application).

        3. Use your domain hoster. Most of them have APIs available for manipulating DNS entries, so you could push your private network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be problems similar to those in number 4.

        4. The one we currently use, because for us it's the most logical choice: we delegate the subdomain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs do not forward the answers, or don't do so reliably. For example, my ISP returns a positive result for a DNS request for a *.local.. domain only every 10th time I run nslookup. (Can someone explain this?)

    Here's the real question: is there another solution we haven't thought of? Or: which of these methods do you use?
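
    For option 2, dnsmasq's per-domain forwarding rules are the usual mechanism; a minimal sketch of /etc/dnsmasq.conf, with the internal zone name and the VPN DNS server's address as assumptions:

        # Send queries for the VPN-only zone to the DNS server inside the VPN
        server=/local.example.com/192.168.252.1

        # Everything else goes to a fast public resolver
        server=8.8.8.8

        # Never forward private-range reverse lookups to the public resolver
        bogus-priv

    With one server=/zone/address line per VPN, a single local dnsmasq can front several disconnected VPNs at once.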

  • [Windows 7] Certain Programs cannot access internet

    - by Cindy
    Operating system: Windows 7 (x64)
    Problem: certain programs are unable to access the internet. They claim that there is no connection even when you are already connected.

    Hello, before we start: just letting you know I'm new here, and I'm very new to Windows 7 - I installed it on my laptop two days ago, and I have a few problems.

    I play World of Warcraft, as well as a variety of other games. When I first attempt to log into a game, I get a Windows error message, but it doesn't stop there. I thought World of Warcraft had got corrupted during the upgrade, but it seems I am unable to access the internet from other online games as well. Most say something along the lines of "Cannot connect to patch server, try again later." I cannot use a downloader either.

    Also, about Internet Explorer: the 32-bit version of the browser cannot connect to the internet - when I try to go to "google.com", it says the same thing. I'm only accessing this site through Internet Explorer x64, which I would have been fine with if it were compatible with Adobe Flash. The only things that seem to connect to the internet are Internet Explorer x64 and Windows Live Messenger.

    Here are the steps I have taken, none of which worked:

        1. Disabled Windows Firewall.
        2. Enabled Windows Firewall, but allowed the specific programs to access the internet, and allowed all incoming access.
        3. Disabled UAC, ran the programs as an admin, and set compatibility to Vista.
        4. Uninstalled an anti-virus program (McAfee Security Suite 2010).
        5. Reinstalled the programs.
        6. Reinstalled Windows 7.
        7. Repeated the steps on the Administrator account.

    Please assist me with this problem. I need to get back into the game. Thanks so much in advance.
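
    When only some programs lose connectivity, especially after an anti-virus uninstall, a damaged Winsock catalog is a common cause on Windows 7; one more thing worth trying from an elevated command prompt (a standard, reversible reset, not a guaranteed fix):

        :: Reset the Winsock catalog (clears broken LSPs left behind by security software)
        netsh winsock reset

        :: Reset the TCP/IP stack, logging the changes
        netsh int ip reset c:\resetlog.txt

        :: Both resets take effect only after a reboot
        shutdown /r /t 0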

  • DNSBL listed at zen.spamhaus.org - can't get outgoing mail working? Am I interpreting the response correctly?

    - by Joe Hopfgartner
    I have a problem with a mail server, and there is something about it I just don't understand. I can connect, authenticate, and specify the sender address - but when specifying the receiver I get a 550 error, which looks like this:

        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161

    Now the strange thing is that 62.178.15.161 is my local client address, not the server's IP address. Also, the error code 550 is defined as:

        550 Requested action not taken: mailbox unavailable

    To me that makes no sense at all. Why this error code with this Spamhaus message? Why the local IP address and not the server's? Exim is running on the server, and nothing turns up in the logs mail.err, mail.info, mail.log or mail.warn in /var/log. I looked up both the server's and the client's IP addresses on blacklists: the client's address is listed on some (as expected), but the server is totally clean.

    Here is the complete telnet log from when I reproduced the error. Mail clients like Evolution and Thunderbird give me the same Spamhaus error message.

        joe@joe-desktop:~$ telnet mail.hunsynth.org 25
        Trying 193.164.132.42...
        Connected to mail.hunsynth.org.
        Escape character is '^]'.
        220 hunsynth.org ESMTP Exim 4.69 Sat, 01 Jan 2011 17:52:45 +0100
        HELP
        214-Commands supported:
        214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA NOOP QUIT RSET HELP
        EHLO AUTH
        250-hunsynth.org Hello chello062178015161.6.11.univie.teleweb.at [62.178.15.161]
        250-SIZE 52428800
        250-PIPELINING
        250-AUTH PLAIN LOGIN CRAM-MD5
        250-STARTTLS
        250 HELP
        AUTH LOGIN
        334 VXNlcm5hbWU6
        dGVzdEBodW5zeW50aC5vcmc=
        334 UGFzc3dvcmQ6
        *****
        235 Authentication succeeded
        MAIL FROM:[email protected]
        250 OK
        RCPT TO:[email protected]
        550-DNSBL listed at zen.spamhaus.org
        550 http://www.spamhaus.org/query/bl?ip=62.178.15.161
        quit
        221 hunsynth.org closing connection
        Connection closed by foreign host.
        joe@joe-desktop:~$

    Update: I tried the same thing from my other server and could successfully send an email. So it really looks like the server checks whether the IP which establishes the connection is on a blacklist. In theory that's a good thing - but shouldn't authenticating on the server prevent it? I just think it would be absurd if I couldn't send email over my SMTP server from my dynamic ISP connection because the dynamic address is listed, even though I have a clean server and log in with credentials.
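
    For context: Exim runs DNS-blacklist checks from its RCPT ACL against the connecting client's IP, which is exactly why the dialup address appears in the 550 (the code itself is just the generic "recipient rejected" class). The usual fix is to exempt authenticated sessions from the dnslists test; a sketch of the relevant stanza in acl_check_rcpt (the surrounding configuration is an assumption):

        acl_check_rcpt:

          # Let authenticated clients relay without a blacklist lookup
          accept  authenticated = *

          # Only unauthenticated, untrusted hosts are checked against Spamhaus
          deny    message  = DNSBL listed at zen.spamhaus.org
                  dnslists = zen.spamhaus.org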

  • Directory listing through FTPS (TLS) is not working

    - by Aron Rotteveel
    We recently switched our server to require TLS for every FTP connection. This has worked flawlessly so far, but one of our clients is having problems. Some facts:

        - Server uses Pure-FTPd
        - Server has a passive port range configured
        - Server has no firewall limitations regarding FTP
        - Client uses WS_FTP
        - Client is behind a router
        - Client connects to the same IP as everyone else, using PASSIVE mode
        - All other clients have no trouble connecting

    Because of the TLS requirement, connecting in ACTIVE mode is rarely possible, but PASSIVE mode is working fine for everyone except this specific client. He seems able to connect, but once a LIST command is performed, things go wrong. Log:

        Finding Host <clienthost> ...
        Connecting to <serverip:21>
        Connected to <serverip:21> in 0.020000 seconds, Waiting for Server Response
        Initializing SSL Session ...
        220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 5 of 50 allowed.
        220-Local time is now 22:14. Server port: 21.
        220-This is a private system - No anonymous login
        220-IPv6 connections are also welcome on this server.
        220 You will be disconnected after 15 minutes of inactivity.
        AUTH TLS
        234 AUTH TLS OK.
        SSL session NOT set for reuse
        SSL Session Started.
        Host type (1): Automatic Detect
        USER <user>
        331 User <user> OK. Password required
        PASS (hidden)
        230-User <user> has group access to: <user>
        230 OK. Current restricted directory is /
        SYST
        215 UNIX Type: L8
        Host type (2): Unix (Standard)
        PBSZ 0
        200 PBSZ=0
        PROT P
        200 Data protection level set to "private"
        PWD
        257 "/" is your current location
        CWD /public_html
        250 OK. Current directory is /public_html
        PWD
        257 "/public_html" is your current location
        TYPE A
        200 TYPE is now ASCII
        PASV
        227 Entering Passive Mode (<serverip>,132,100)
        connecting data channel to <serverip>:132,100(33892)
        Substituting connection address <serverip> for private address <serverip> from PASV
        Using external address <customer ext. ip> instead of local address <customer int. ip> for PORT command
        PORT 82,161,56,225,195,181
        200 PORT command successful
        LIST
        Error reading response from server. It appears that the connection is dead. Attempting reconnect...

    Any help is appreciated.
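
    For reference, the parts that have to line up when FTPS crosses NAT: once the control channel is encrypted, routers can no longer read or rewrite PASV/PORT replies, so the passive range must be fixed, correctly advertised, and open end to end, and any FTP helper/ALG on the client's router should be disabled (it cannot inspect TLS and often breaks the session instead). A sketch of the server-side pieces using the Debian-style Pure-FTPd config layout (the range, public address, and layout are assumptions):

        # Pin the passive data ports Pure-FTPd may hand out
        echo "30000 30100" > /etc/pure-ftpd/conf/PassivePortRange

        # If the server is itself behind NAT, advertise the public address
        echo "203.0.113.10" > /etc/pure-ftpd/conf/ForcePassiveIP

        # Open the same range in the server firewall
        iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT

        service pure-ftpd restart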

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. I installed the Atmail server + webmail VM (a Linux CentOS distribution) in VMware Fusion Professional 6, with the VMware Tools add-on, and it works flawlessly. I've set the VM to start on boot, and that works very reliably. However, I've been looking for a way to also shut it down safely and gracefully whenever OS X shuts down, for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown when the battery starts running low, so that's no additional problem.

    The first thing I did was go into Fusion's preferences and select "Power off the virtual machine" when closing it. However, I noticed that for some arcane reason closing the VM window would forcibly power off the VM. I then found a post that showed me how to change the default power options, and I managed to have the VM shut down cleanly when closing its window or quitting Fusion altogether.

    At that point I was hoping the problem was solved, but as it turns out, when a system shutdown is invoked, OS X doesn't wait for the VM to shut down and terminates Fusion before it has a chance to do so. I started looking for a way to automate shutting down the guest OS via some advanced setting, but had no luck. That's when I found a command that can shut the VM down - vmrun - and it worked. The only thing left was to find a way to execute this command on OS X shutdown, giving the VM a little time to power off completely.

    This turned out to be a nightmare: I spent hours looking at several ways to do it - Startup Items, rc.shutdown, cron, launchd, etc. - but none of them worked the way I had configured them. I have found very limited information on using launchd to run a script at shutdown, and since launchd is the current mechanism in the OS X world, I'm hoping someone out there will be able to help me with this. I still think this is an extremely basic feature to ask for, and I was really surprised to find so little documentation on so many aspects of this problem. Is Fusion too basic an application for this? I really hope someone can help. Thank you very much in advance.
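
    For reference, a sketch of the kind of graceful-stop script this needs, built on the vmrun command the question mentions (all paths are assumptions); one pre-launchd mechanism that can still invoke a script at shutdown/logout on Mavericks is the loginwindow LogoutHook, shown in the comment:

        #!/bin/sh
        # stopvm.sh - gracefully stop the guest before the host goes down.
        # Hook it in with the (old but still functional) loginwindow hook:
        #   sudo defaults write com.apple.loginwindow LogoutHook /usr/local/sbin/stopvm.sh

        VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"
        VMX="/Users/admin/Virtual Machines.localized/atmail.vmwarevm/atmail.vmx"

        # 'soft' asks the guest, via VMware Tools, to shut down cleanly
        "$VMRUN" -T fusion stop "$VMX" soft

        # Block until the VM leaves the running list, so OS X waits for it
        while "$VMRUN" list | grep -q "atmail.vmx"; do
            sleep 2
        done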

  • iPhone 3.1 Black Screen

    - by churnd
    Since I updated my 3G iPhone to 3.1, it becomes unresponsive after ~4-5 hours. It looks like it's off, but it's not, and none of the buttons do anything that I can see. The only way I can get the screen back is to do a hard reset (Power + Home). Has anyone else had this problem? What can I do to fix it? It has already happened 3 times, which is very annoying. The Discussions forum on Apple.com also shows lots of people having the same problem. One person said a restore fixed it for him; I haven't tried that yet, because many others tried it and it didn't help. A few others said turning off Wi-Fi fixed it for them, so I'm going to try that today. Another thing I've noticed: when I plug my iPhone into my MacBook Pro, iTunes 9 no longer launches. It did before the update.

    Further update and possible solution: I left my iPhone plugged into my MacBook Pro yesterday while at work and didn't use it much throughout the day. I noticed it didn't lock up once - I suspect because it was connected to a power source. I continued to use the iPhone with no problems yesterday and today; still no lockups as of now. I personally think this was Spotlight and/or Genius "doing its thing" after a new install - kind of like an OS upgrade on a Mac, where Spotlight has to rebuild its index and the first couple of hours can be sluggish. This would be even more pronounced on the iPhone, where processing power is limited, and I'm sure the processor is clocked down a bit on battery power, which would further amplify the problem. Again, just my guess. My iPhone is working fine now.

    Final solution: update to 3.1.2. Problem solved.

  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen a similar issue and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit, the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf

    I purchased the unit used about 6 months ago. It stopped working after 3-4 backups - it's used one day a month to do a monthly backup of another system, so suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required) light, and the manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it, but none have worked: power cycling, and re-seating the SCSI cables at both ends. The unit was used, so I didn't pay much ($500), and I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this will cost more than $100-$150.

    I'm curious whether anyone here has been around these devices, or is perhaps an HP repair person who can suggest some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit, I see one of two scenarios so far:

        1. The unit powers up, shows "self test" on the LCD, then the LCD changes to show all possible images and the OAR light comes on.
        2. The unit powers up, the LCD is completely blank, the green lights go through some sort of on/off sequence, and later the amber OAR light comes on and stays on.

    If it's a simple misalignment issue I may be able to fix it myself, but not knowing what can cause the OAR light to come on gives me nowhere to even start, and Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP diagnostic tools mentioned in many manuals. The unit is connected to a Linux box, and the 3-4 backups I've done with it so far had no issues; we run Amanda backup. Before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.
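
    Since the unit hangs off a Linux box, it may be worth checking whether the OS still sees the drive and the loader at all before paying for service; a quick sketch (the device nodes /dev/st0 and /dev/sg1 are assumptions - check dmesg for the real ones):

        # Does the SCSI bus still enumerate the drive and the autoloader?
        cat /proc/scsi/scsi

        # Ask the tape drive for its status via the st driver
        mt -f /dev/st0 status

        # Exercise the loader/changer directly (mtx talks to the sg device)
        mtx -f /dev/sg1 status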

  • Guests can't access KVM host server by name although nslookup and dig returns correct record

    - by user190196
    I have a KVM host that also runs an Apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.122.1). The guests can successfully reach the yum repos on the host by IP address; for example, curl 192.168.122.1/repo1 returns the content without problems. But I'd like the guests to be able to reach the web server on the host by name rather than by IP address.

    I added the desired name record to the host's /etc/hosts file, and libvirt's dnsmasq service seems to be serving it correctly to the guests, since nslookup and dig both successfully resolve the name on the guests:

        [root@localhost ~]# nslookup repo
        Server:     192.168.122.1
        Address:    192.168.122.1#53

        Name:   repo
        Address: 192.168.122.1

        [root@localhost ~]# dig repo

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;repo.              IN  A

        ;; ANSWER SECTION:
        repo.           0   IN  A   192.168.122.1

        ;; Query time: 0 msec
        ;; SERVER: 192.168.122.1#53(192.168.122.1)
        ;; WHEN: Tue Sep 17 02:10:46 2013
        ;; MSG SIZE  rcvd: 38

    But curl/ping/etc. still fail:

        [root@localhost ~]# curl repo
        curl: (6) Couldn't resolve host 'repo'

    while a request by IP address works:

        [root@localhost ~]# curl 192.168.122.1
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
        <head>
        <title>Index of /</title>
        [...]

    Same with ping:

        [root@localhost ~]# ping repo
        ping: unknown host repo
        [root@localhost ~]# ping 192.168.122.1
        PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
        64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms
        64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms
        64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms
        ^C
        --- 192.168.122.1 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2298ms
        rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms

    I tried adding "repo 192.168.122.1" to the guests' /etc/hosts files, but still no dice. I also tried changing the guests' /etc/nsswitch.conf with both:

        hosts: files dns

    and:

        hosts: dns files

    I've read the relevant libvirt documentation, and I'm not sure where else to learn more about this so I can move forward with it.
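
    One libvirt-native alternative to relying on the host's /etc/hosts is to put the record into the network definition itself, so libvirt's dnsmasq serves both the short name and a fully qualified one; a sketch to apply with `virsh net-edit default` followed by a network restart (the extra FQDN is an assumption):

        <network>
          <name>default</name>
          <forward mode='nat'/>
          <bridge name='virbr0'/>
          <ip address='192.168.122.1' netmask='255.255.255.0'>
            <dhcp>
              <range start='192.168.122.2' end='192.168.122.254'/>
            </dhcp>
          </ip>
          <!-- Serve these names from libvirt's dnsmasq -->
          <dns>
            <host ip='192.168.122.1'>
              <hostname>repo</hostname>
              <hostname>repo.example.lan</hostname>
            </host>
          </dns>
        </network>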

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine:

        - another Redmine instance was installed on the second server without problems
        - MySQL (running on the same machine as Redmine) was configured as master-master replication

    Because the nodes are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun  5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============

        Online: [ node1 node2 ]

        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms

        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun  5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr)
        2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    (see http://www.clusterlabs.org/wiki/Debugging_Resource_Failures)

    Reply to @DukeLion ("Which router receives RIP updates?"): when I start the VIPArip resource, ripd is run with the configuration file below (on node1).

    /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md), and the array is also periodically backed up to an external hard drive to make sure I don't lose anything.

    However, I'm always accessing the files from other computers on the home network through an SMB share, and this incurs a considerable speed penalty (especially when connected over WLAN). It is quite annoying when editing large files, such as digital camera RAWs, for example.

    I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up file access. The client should preferably not keep a copy of all the data on the server, as the data consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should cache only the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting changes (made at the same time on different clients) are unlikely to happen in my use case, but I would prefer that they be handled properly (with a notification to the user).

    I've come across the following options so far:

        - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
        - A distributed file system such as OpenAFS. I'm not sure this will speed up file access, and it is probably overkill for what I need.
        - Maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money; a free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
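
    On the Linux-client side, NFS can already do the "cache only what you touch" part via the kernel's FS-Cache facility; a minimal sketch (the export path is an assumption, and this does nothing for Windows clients):

        # Install and start the cache daemon (package is usually 'cachefilesd');
        # /etc/cachefilesd.conf defaults to a cache under /var/cache/fscache
        service cachefilesd start

        # Mount the export with 'fsc' so reads are cached on local disk
        mount -t nfs -o fsc server:/export/photos /mnt/photos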

  • netsh wlan add profile not importing passphrase

    - by sirlancelot
    I exported a wireless network connection profile from a Windows 7 machine correctly connected to a WiFi network with a WPA-TKIP passphrase. The exported XML file shows the correct settings and a keyMaterial node, which I can only guess is the encrypted passphrase.

    When I take the XML to another Windows 7 computer and import it using netsh wlan add profile filename="WiFi.xml", it correctly adds the profile's SSID and encryption type, but a balloon pops up saying that I need to enter the passphrase. Is there a way to import the passphrase along with all other settings, or am I missing something about adding profiles?

    Here is the exported XML with personal information removed:

        <?xml version="1.0"?>
        <WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1">
          <name>[removed]</name>
          <SSIDConfig>
            <SSID>
              <hex>[removed]</hex>
              <name>[removed]</name>
            </SSID>
            <nonBroadcast>false</nonBroadcast>
          </SSIDConfig>
          <connectionType>ESS</connectionType>
          <connectionMode>auto</connectionMode>
          <autoSwitch>false</autoSwitch>
          <MSM>
            <security>
              <authEncryption>
                <authentication>WPAPSK</authentication>
                <encryption>TKIP</encryption>
                <useOneX>false</useOneX>
              </authEncryption>
              <sharedKey>
                <keyType>passPhrase</keyType>
                <protected>true</protected>
                <keyMaterial>[removed]</keyMaterial>
              </sharedKey>
            </security>
          </MSM>
        </WLANProfile>

    Any help or advice is appreciated. Thanks.
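
    The <protected>true</protected> element is the clue: by default the exported key is encrypted with a machine-specific (DPAPI) key, so no other computer can decrypt it. Exporting with the key in clear text avoids this; a sketch (the profile and folder names are examples):

        :: On the source machine: export with the passphrase in clear text (run as admin)
        netsh wlan export profile name="MyNetwork" key=clear folder=C:\profiles

        :: On the target machine: import as before
        netsh wlan add profile filename="C:\profiles\Wi-Fi-MyNetwork.xml"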

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than its user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured the throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)

    iperf:

        [  4]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

    According to "Throughput overhead expectations" in the manual, the bottleneck would be whichever is slower, the network or the disk, and DRBD should add an overhead of 3%. In my case network and I/O seem pretty evenly matched, so it sounds like I should be able to get around 100 MB/s.

    With the raw DRBD device, I get

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global { usage-count yes; }
        common { protocol C; }
        syncer { al-extents 1801; rate 33M; }

    data_mirror.res:

        resource data_mirror {
          device    /dev/drbd1;
          disk      /dev/sdb1;
          meta-disk internal;
          on cluster1 {
            address 192.168.33.10:7789;
          }
          on cluster2 {
            address 192.168.33.12:7789;
          }
        }

    For the hardware I have two identical machines:

        - 6 GB RAM
        - Quad-core AMD Phenom, 3.2 GHz
        - Motherboard SATA controller
        - 1 TB WD drive, 7200 RPM, 64 MB cache

    The network is 1 Gb, connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to see what's happening. I used ibmonitor and measured average bandwidth while running the dd test 10 times. I got:

        avg ~450 Mbit/s writing to ext4
        avg ~800 Mbit/s writing to the raw device

    It looks like with ext4 DRBD uses about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
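
    Two things commonly tuned for exactly this symptom (raw device fast, filesystem slow): barriers/flushes, which a journalled filesystem triggers far more often than one large direct-I/O dd, and the replication buffer sizes. A sketch of the 8.3-style options - the disk options are only safe with a battery-backed write cache, and all values are assumptions to experiment with, not recommendations (note also that the `rate 33M` syncer setting only caps resynchronisation, not live replication):

        resource data_mirror {
          disk {
            # Only safe with battery-backed cache: barriers/flushes are what
            # hurt journalled filesystems far more than a single raw dd
            no-disk-barrier;
            no-disk-flushes;
          }
          net {
            max-buffers      8000;
            max-epoch-size   8000;
            unplug-watermark 16;
          }
          # ... device/disk/address sections as before ...
        }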

  • SMPS stops when I plug in a SATA drive?

    - by claws
    Part 1: My first question is: are all the 4-wire power connectors (intended for hard disks/DVD drives, not the motherboard) the same? I've been treating them all as the same for years and had no problem. Yesterday I borrowed a SATA disk from my friend, connected it to my computer using a SATA power adapter (4-wire), and when I switched the computer on there were fumes coming out of the connector. I immediately turned it off (within a second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3 V and 12.2 V. I couldn't measure the current, but my SMPS label reads:

        DC Output: 3.3V (25A), +5V (32A), -5V (0.3A), +12V (17A), -12V (0.8A)

    and the SATA hard disk label reads:

        Input: +5V (0.72A), +12V (0.52A)

    I'm shocked - I never noticed this before. Does the SATA power adapter scale the current down to what's required? If it doesn't, well, I've been connecting drives the same way for years and never had a problem; this is the first time I've encountered it.

    Part 2: I wanted to return the drive to my friend. He has two hard disks, one SATA and one PATA; it's the SATA one I borrowed. Previously, when he switched his machine on, the CPU fan would start, stop for a second, then start again and keep running - I don't know why it stopped and started. Now, when I connect this SATA disk and switch the computer on, the CPU fan starts (just for an instant, not even half a second) and stops; it doesn't start again. I mean, the power from the SMPS has stopped. If I disconnect the SATA disk, it works fine.

    What seems to be the problem? I have no idea why there were fumes, or why his SMPS starts and then cuts power, or what relation this has to connecting the SATA disk.

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it simply refuses to recognise my Vertex 2. It reports the disk as ATA with no SMART support, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off the drive, which worked like a charm, so where am I going wrong with Ubuntu?

    Specs:

        - Asus M4N68T-M LE V2 (BIOS 0702, most recent)
        - OCZ Vertex 2 SSD, 60 GB
        - AMD Athlon II X4 640
        - Patriot PSD34G13332 4 GB DDR3 RAM (two banks)

    Edit: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (boot into the working install on the other drive, install onto the SSD, and then boot from the SSD).

    Edit 2: OK, so that doesn't work. The install detects the SSD; however, it cannot format it.

    Edit 3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format the disk, or run a write benchmark, it craps out, and after that it claims SMART is not supported. Basically it seems I can't write to the disk, as it stops working when I do. I will try running repeated read benchmarks to see if that has any effect.

    Edit 4: I'm running several read benchmarks on the drive right now; they give the results to be expected from an SSD. As long as the read benchmarks don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    Edit 5: Parted Magic did recognise the drive, and hdparm -I could even tell me the drive was in a frozen state. I power-cycled it (just pulled the plug from the SSD and plugged it back in) and it wasn't frozen anymore. After that I could upgrade the drive's firmware (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wasn't recognised, and hdparm didn't want to talk to it, saying: HDIO_DRIVE_CMD(identify) failed: Invalid exchange.

    Edit 6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously), Ubuntu lets me format the drive, mount it and write to it. The installer also recognises it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    Hello, I have a big problem installing a certificate for nginx (the same happens with Apache, though). I have three files: __domain_com.crt, __domain_com.ca-bundle and ssl.key. I tried to concatenate the certificate and the CA bundle:

        cat __domain_com.crt __leechpack_com.ca-bundle > bundle.crt

    but if I do it like this I get an error:

        [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed
        (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line
        error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib)

    And that's because the delimiters of the certificates aren't separated:

        ZqTjb+WBJQ==
        -----END CERTIFICATE----------BEGIN CERTIFICATE-----
        MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv

    If I separate them with a newline between the certificates, nginx will at least start, but I get the same warning from Firefox:

        This Connection is Untrusted
        You have asked Firefox to connect securely to domain.com,
        but we can't confirm that your connection is secure.

    The concatenation solution is the one given by Globe SSL and by the nginx documentation, but it doesn't work; I think the bundle is being ignored:

        http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html
        http://nginx.org/en/docs/http/configuring_https_servers.html#chains
        http://wiki.nginx.org/NginxHttpSslModule

    If I do openssl s_client -connect down.leechpack.com:443 :

        CONNECTED(00000003)
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=27:certificate not trusted
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=21:unable to verify the first certificate
        verify return:1
        ---
        Certificate chain
         0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
           i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
         1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA
           i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl
        MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU
        SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0
        ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE
        CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi
        eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX
        aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI
        hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/
        Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker
        49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq
        XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua
        W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T
        GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw
        ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0
        8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw
        ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr
        BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv
        bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov
        L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww
        dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns
        LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY
        aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr
        LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K
        s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl
        w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI
        5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz
        WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R
        gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD
        ZqTjb+WBJQ==
        -----END CERTIFICATE-----
        subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 3313 bytes and written 343 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F
            Session-ID-ctx:
            Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B
            Key-Arg   : None
            Start Time: 1288618211
            Timeout   : 300 (sec)
        Verify return code: 21 (unable to verify the first certificate)
        ---
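
    A likely culprit, given the "bad end line" error, is a missing trailing newline at the end of the first PEM file; the sketch below normalises newlines while concatenating (nginx expects the server certificate first, then the CA bundle - file names are the ones from the question):

        # 'awk 1' re-emits every line with a guaranteed trailing newline,
        # so the END/BEGIN markers can never run together:
        awk 1 __domain_com.crt __domain_com.ca-bundle > bundle.crt

        # Quick sanity check: the count should match the number of certs
        grep -c "BEGIN CERTIFICATE" bundle.crt

        # nginx server block then points at the combined file:
        #   ssl_certificate     /etc/nginx/bundle.crt;
        #   ssl_certificate_key /etc/nginx/ssl.key;

    The s_client output showing only an incomplete chain ("unable to verify the first certificate") is consistent with the bundle not actually being served.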

  • Windows Vista: Networking can only connect "local only"

    - by Damien
    I am attempting to debug a problem on a Windows Vista laptop - not mine! Until recently (the last week or so), it had been operating normally for about 4 years :)

    The problem is that the laptop has issues connecting to the local network (a basic wireless home router; more on that later) and to the internet (via ADSL). This applies both wired [Broadcom chipset] and wireless [Intel chipset]. I will elaborate below.

    About the network: there are three other clients (an HTC phone, an Ubuntu 12.04 desktop [wired] and an Ubuntu 10.04 laptop [wireless]), all of which connect to the network and the internet normally. A Windows 7 virtual machine running on said desktop also connects normally. I have tried two different wireless routers - a Netgear DG834G and a Netgear DGN3500 - and the failure mode is the same on both; updating both to the latest firmware does not help. Overall, it seems safe to say the problem is localised to the laptop in question. I do not have another Vista client to test with.

    The specific symptoms: when "connected", it says "Local Only" and reports that it cannot connect to the internet. This is true for both wired and wireless. It can get an IP address (192.168.0.5), and the router (192.168.0.1) reports that it can see the device. When I try to ping, I get the following results:

        ping 192.168.0.1 - (router) all packets lost
        ping 192.168.0.5 - (laptop's own address) OK
        ping 192.168.0.4 - (desktop) all packets lost

    Pinging the problematic laptop from the desktop results in "From 192.168.0.4 icmp_seq=1 Destination Host Unreachable".

    The most promising "fix" from trawling forums is KB928233, which does not work for me. The problem persists across restarts (both full shutdown and hibernate), so it appears not to be sleep-related. I am not a regular Vista user, though I can fumble my way about a bit. Does anyone have any other suggestions as to what I should do? Is there any further information I can provide?

  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, not as planned, when clients connect to the VPN they end up using the server's internet connection (e.g. whatsmyip.com shows the server's address, not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routing table:

        Kernel IP routing table
        Destination     Gateway          Genmask          Flags Metric Ref  Use Iface
        10.8.0.2        *                255.255.255.255  UH    0      0      0 tun0
        10.8.0.0        10.8.0.2         255.255.255.0    UG    0      0      0 tun0
        69.64.48.0      *                255.255.252.0    U     0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0
        default         static-ip-69-64  0.0.0.0          UG    0      0      0 eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source    destination
        fail2ban-proftpd  tcp  --  anywhere  anywhere  multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere  anywhere  multiport dports ssh
        ACCEPT            udp  --  anywhere  anywhere  udp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:20000
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:webmin
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:https
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:www
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:imaps
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:imap2
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:pop3
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ftp
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:smtp
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ssh
        ACCEPT            all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source       destination
        ACCEPT  all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT  all  --  10.8.0.0/24  anywhere
        REJECT  all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

        Chain fail2ban-proftpd (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

        Chain fail2ban-ssh (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

    My goal is that clients can only talk to the server and to other connected clients. Hope I made sense. Thanks for the help!
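
    To enforce the stated goal (clients talk only to the server and to each other), it should be enough that the server neither pushes a default route - there is no redirect-gateway in the posted server.conf, so the client configs actually deployed are worth double-checking - nor forwards/NATs tunnel traffic out. A sketch (the interface names, and the existence of a MASQUERADE rule, are assumptions):

        # Remove any masquerade rule that lets VPN clients out via the server
        # (only if such a rule exists - this is an assumption):
        iptables -t nat -D POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

        # Block forwarding from the tunnel to anywhere but the tunnel itself;
        # with 'client-to-client' set, client traffic to other clients is
        # switched inside OpenVPN and never reaches the FORWARD chain anyway
        iptables -I FORWARD -i tun0 ! -o tun0 -j DROP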

  • Why is site serving different SSL certs to different browsers?

    - by TRiG
    The SSL certificate on menswearireland.com and www.menswearireland.com works fine in Safari, Chrome, SeaMonkey, K-Meleon, QtWeb, Firefox, and Opera. However, Internet Explorer claims that there is an error:

        The security certificate presented by this website was not issued by a
        trusted certificate authority. The security certificate presented by this
        website was issued for a different website's address. Security certificate
        problems may indicate an attempt to fool you or intercept any data you
        send to the server.

        Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)

    Another site hosted on the same managed server shows no errors: achill-fieldschool.com and www.achill-fieldschool.com work fine in IE, even though as far as I can tell the certificate is set up identically. What am I doing wrong? This is a LAMPP server running Plesk.

    It looks like the server is showing different certificates to different clients. To some clients it presents a RapidSSL certificate issued to www.menswearireland.com, with menswearireland.com as a valid alternative name. To other clients it presents a Parallels Panel certificate, issued to "Parallels Panel". Here are results from a few different online SSL checkers: most say it's fine, while two show errors.

    Three online checkers say it's valid:

        - Comodo SSL Check: valid
        - DigiCert SSL Check: valid
        - SSL Shopper SSL Check: valid
          Common name: www.menswearireland.com
          SANs: www.menswearireland.com, menswearireland.com
          Valid from October 2, 2012 to November 4, 2013
          Serial Number: 559425 (0x88941)
          Signature Algorithm: sha1WithRSAEncryption
          Issuer: RapidSSL CA

    Another online checker seems to see a completely different certificate:

        - GeoCerts SSL Check: invalid
          Common name: Parallels Panel
          Organization: Parallels
          Valid from August 15, 2012 to August 15, 2013
          Issuer: Parallels Panel

    Another online checker sees more than one certificate:

        - Symantec SSL Check: invalid
          "The certificate installation checker connected to the Web server and
          read its certificates, but could not determine which is the primary
          certificate for the Web server."

    Incidentally, on both menswearireland.com and achill-fieldschool.com the homepage redirects from HTTPS to HTTP. To see the SSL details, visit the page /account on both sites (that page redirects from HTTP to HTTPS).

    I've found more information in a more detailed online SSL checker (https://www.ssllabs.com/ssltest/analyze.html?d=menswearireland.com):

        This site works only in browsers with SNI support

    My understanding is that SNI (RFC 6066) is a method for putting many SSL sites on one shared IP address and port. It does not work in Internet Explorer on older versions of Windows (this depends on the version of Windows, not the version of Internet Explorer). However, all our SSL sites are on unique IP addresses, so we shouldn't need SNI.
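
    The SNI theory is easy to test directly, since openssl can talk to the server both with and without the server_name extension; a quick sketch:

        # Without SNI - roughly what old IE/XP-era clients send
        openssl s_client -connect menswearireland.com:443 </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer

        # With SNI - what modern browsers send
        openssl s_client -connect menswearireland.com:443 \
          -servername www.menswearireland.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer

    If the first command shows the Parallels Panel certificate and the second the RapidSSL one, the server is relying on SNI after all, regardless of how the IP addresses are supposed to be assigned.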

  • Having trouble Getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an Axis camera connected to our site (camba.tv) through the Axis One-Click Connection Component (which acts as a proxy). We can communicate with this camera only over HTTP, by setting the proxy to our OCCC server's address. If we want to get RTSP streams (H.264) we are left with only the "RTSP over HTTP" option. For this I have followed the Axis VAPIX 3 documentation, section 3.3.

    I issue requests through Fiddler but get no response. But when I put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) into Windows Media Player (with the proxy set to the OCCC server, 212.78.237.156:3128), the player is able to get the RTSP stream over HTTP after logging in.

    I made a trace of the communication between the camera and Windows Media Player with Wireshark, and the request that brings up the stream looks like this:

        http://1.00408cbea38b/axis-media/media.amp HTTP/1.1
        x-sessioncookie: 619
        User-Agent: Axis AMC
        Host: 1.00408CBEA38B
        Proxy-Connection: Keep-Alive
        Pragma: no-cache
        Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth"

        HTTP/1.0 200 OK
        Content-Type: application/x-rtsp-tunnelled
        Date: Tue, 02 Nov 2010 11:45:23 GMT

        RTSP/1.0 200 OK
        CSeq: 1
        Content-Type: application/sdp
        Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/
        Date: Tue, 02 Nov 2010 11:45:23 GMT
        Content-Length: 410

        v=0
        o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B
        s=Media Presentation
        e=NONE
        c=IN IP4 0.0.0.0
        b=AS:50000
        t=0 0
        a=control:*
        a=range:npt=0.000000-
        m=video 0 RTP/AVP 96
        b=AS:50000
        a=framerate:30.0
        a=transform:1,0,0;0,1,0;0,0,1
        a=control:trackID=1
        a=rtpmap:96 H264/90000
        a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA==

        RTSP/1.0 200 OK
        CSeq: 2
        Session: 3F4763D8; timeout=60
        Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY"
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        RTSP/1.0 200 OK
        CSeq: 3
        Session: 3F4763D8
        Range: npt=0-
        RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        [Binary stream content]

    But when I copy this request into Fiddler, I only get a 200 status code with Content-Type set to application/x-rtsp-tunnelled, and there is no stream data. The only thing I do differently is to use Basic instead of Digest in the Authorization header, and I do not get a 401 (Unauthorized) status code. Can anyone explain what's happening here? How can I write the request sequence to get the stream in Fiddler? If needed, I can upload the Wireshark request dump somewhere.
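
    For what it's worth, RTSP-over-HTTP tunnelling of this kind normally uses *two* HTTP requests carrying the same x-sessioncookie: a long-lived GET that becomes the server-to-client channel, and a POST on which every RTSP command is sent base64-encoded. A single replayed GET (as in Fiddler) would therefore return 200 and then sit waiting for commands that never arrive. A rough sketch of the two channels with curl - the cookie, proxy, and camera address are taken from the question, but the exact sequence is an assumption based on the general tunnelling scheme, not the verified VAPIX behaviour:

        # Channel 1 (server -> client): stays open, carries RTSP replies + stream
        curl -x 212.78.237.156:3128 \
             -H "x-sessioncookie: 619" \
             -H "Accept: application/x-rtsp-tunnelled" \
             -o stream.bin "http://1.00408CBEA38B/axis-media/media.amp" &

        # Channel 2 (client -> server): RTSP commands, base64-encoded
        printf 'DESCRIBE rtsp://1.00408CBEA38B/axis-media/media.amp RTSP/1.0\r\nCSeq: 1\r\n\r\n' \
          | base64 \
          | curl -x 212.78.237.156:3128 \
                 -H "x-sessioncookie: 619" \
                 -H "Content-Type: application/x-rtsp-tunnelled" \
                 --data-binary @- "http://1.00408CBEA38B/axis-media/media.amp"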

  • Internal/external DNS with subdomains

    - by ScottMcGready
    I've got an internal DNS server (part of OS X Server) acting as the main DNS server for a specific (physical) site. When it can't resolve a hostname itself, it forwards the request to Google's DNS servers. Everything works well apart from a couple of issues, which I think may be related but can't get to the bottom of.

    I've got a number of intranet sites set up that people can access by going to addresses like:

        intranet.mydomain.com
        selfservice.mydomain.com

    These point to various servers in the building that host the sites. Whether internal or external (without VPN), I can access these sites just fine. The issue arises when I want to host, say, test.mydomain.com on an external server: it fails to resolve, because the primary zone for mydomain.com is internal. How can I get the server to look up Google's DNS (or another external resolver) for names in that zone that aren't in its list? I've tried everything I can think of (adding my host's nameservers, etc.) but nothing works fully.

    Also, I can't access the intranet sites when connected via VPN, and from what I can gather, this might be related to the DNS issue - I just wanted to give as much information as possible.

    Edit: The domain mydomain.com is hosted externally and pointed at the site's public IP. From there we forward the requests to the relevant internal server. Externally everything works; internally, though, every subdomain of mydomain.com is served locally, and I want anything not defined locally to be served from Google's DNS / externally.

    DNS configuration: here is the current configuration (OS X Server's DNS tab). I've blurred out the .private address, as it's not really relevant (it's the server's name); the coloured dots are just there to link everything together. [Screenshot not reproduced here.]

    In an attempt to clarify, this is what I want:

        intranet.mydomain.com     -> 192.168.0.12
        selfservice.mydomain.com  -> 192.168.0.13
        *.mydomain.com            -> forward to external DNS
        mydomain.com              -> forward to external DNS

    At the moment no subdomain of mydomain.com is forwarded on (I think this is because the primary zone is mydomain.com, with an NS of intranet.mydomain.com), but I could do with a little nod in the right direction.
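
    OS X Server's DNS service is BIND underneath, and the usual trick for this split-horizon case is to stop being authoritative for the whole domain and instead define one tiny zone per internal host, so every other name in mydomain.com falls through to the forwarders. A sketch of the named.conf fragments (the zone file names are assumptions):

        // One micro-zone per internal name; everything else in
        // mydomain.com is no longer answered locally and goes to
        // the configured forwarders (e.g. Google DNS) instead.
        zone "intranet.mydomain.com" {
            type master;
            file "db.intranet.mydomain.com";    // A record -> 192.168.0.12
        };

        zone "selfservice.mydomain.com" {
            type master;
            file "db.selfservice.mydomain.com"; // A record -> 192.168.0.13
        };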
