Search Results

Search found 8790 results on 352 pages for 'known hosts'.

  • vSphere - datastore falling off a host

    - by Chadddada
    Recently we have been running the vCheck PowerShell script daily to help monitor our vSphere ESX 4.0 environment. One of the oddities we have been seeing is that some of the datastores on the SAN don't always show up on every host. Our hosts are connected redundantly, via FC, to Brocade FC switches, which then connect via fiber to our EMC AX4 SAN. While all the datastores are presented to each host, and the hosts see them initially, they sometimes seem to fall off and are no longer visible. It's easy enough to rescan for datastores and add them back to the hosts, but this seems to be an error. Has anyone else seen this, or does anyone know why it may be happening? Responses to questions: 1. Is it always the same ESX servers that lose their connection? – Scott Warren. No, this happens randomly on random hosts. If a VM whose disks are on a SAN datastore is running on a particular host, then that datastore won't disappear from that host. It seems to happen when a host doesn't touch a datastore for a while; it just forgets about it.
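
    For reference, the rescan can be scripted from the ESX service console rather than clicked through the vSphere client; a minimal sketch (vmhba1 is a placeholder for your FC adapters):

        # Rescan one FC adapter for new or changed LUNs (repeat per vmhba)
        esxcfg-rescan vmhba1
        # Refresh the VMFS volume list so rediscovered datastores reappear
        vmkfstools -V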

  • One dns server in different subnets

    - by hofmeister
    I have installed a small Linux server; the server is in a different subnet than the internet hosts. I added a route to my NAT router to create a connection between the two subnets, and each subnet has its own DHCP server. Subnet A: 192.168.0.0/26, Subnet B: 192.168.1.0/26, Router: 192.168.0.1, Server in A: 192.168.0.62, Server in B: 192.168.1.62.

        internet ____ nat router ___ (Sub A) ___ internet hosts
                               |
                               |____ (Sub B) ___ other hosts

    I can ping every host, and the hosts connected to subnet B have internet access. But sadly I have a problem with the DNS server. I use the DNS server on my NAT router, and I set the DNS server for subnet B to the IP 192.168.0.1, but every DNS entry is a variant of my Linux server's hostname. For example, if the server's hostname is test:

        Test        192.168.0.62   //Server subnet a
        Test-2      192.168.1.62   //Server subnet b
        Test-2-2    192.168.1.1    //host a
        Test-2-2-2  192.168.1.2    //host b

    Any idea what went wrong? The internet DNS resolution works fine.

  • Can't reliably ping 6224 router from directly-attached system

    - by David Mackintosh
    OK, here's my situation. This is on the internet. The 6224 is the router in this picture and physically resides in Kanata. Both VLAN 1697 and 3994 are provided by an internet service provider. These VLANs are provided through a single 1Gb Ethernet wire. The Kanata hosts are directly attached to the 6224; the other two sites are remote. VLAN 3994 is a single IP address space, so theoretically it shouldn't matter physically where the hosts on that subnet are. Here's the problem. I have a monitoring system which is connected further into the internet, so probes from the monitor would come into this diagram on the 1697 VLAN. When I ping hosts at Albert or Bells Corners from the internet, there is 0 loss. The connection looks perfect. When I ping hosts at Kanata, I lose anywhere from 10 to 40% of the pings. The loss is not predictable, but when I do lose them, I always lose at least 3 pings in a bunch, usually 4, rarely more. I have attached a monitor directly to the 6224 in Kanata on 3994. When the monitor pings the 6224 routing interface, I see exactly the same loss pattern -- but NOT at the same time as the loss from the remote system. Ping time is around 1ms. When the monitor pings another system directly attached to the 6224, there is 0 loss. Ping time is about 0.1ms, one-tenth of the time to ping the router. Anyone know what is going on here?

  • Xen virtual host can reach some sites but not others

    - by Tun H S Lee
    Okay, this is killing me. Debian Squeeze, Xen 4.0, brand new install. No iptables rules whatsoever except for the ones added by the default xen bridge script. Dom0 can reach the entire world, no problems. DomU can receive packets from some hosts, but not from others. For instance, if I ping Host A, it works fine. If I ping Host B, the DomU reports 100% packet loss. The hosts are random, but consistent (even after reboots). I can see no pattern to why some work and others don't. In fact, in some cases, different virtual hosts on the same remote server (a server at a different data center) are divided; some work and others do not. I can reboot (DomU or Dom0 too) and the same hosts will work or fail as before. If I tcpdump on Host B while pinging from the DomU, everything looks fine. It sees the echo request coming in and says it's sending one back. However, if I tcpdump peth0 on the Dom0, it never sees the echo reply. Any ideas what could be happening? I'm tearing my hair out here.
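
    A sketch of the captures that would localize where the reply dies (interface names are Xen defaults; the vif ID and address are placeholders for the DomU and Host B):

        # On the Dom0, watch the physical bridge port for both directions of the ping
        tcpdump -ni peth0 icmp and host 192.0.2.10
        # Repeat on the DomU's backend interface to see if the reply reaches the bridge
        tcpdump -ni vif1.0 icmp and host 192.0.2.10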

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to what products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to the latest service pack at least. One of the other guys here has the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my question is: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating small numbers of products? Basically I need a rebuttal to his argument, and I'm unable to find any concrete documentation to prove him wrong.

  • Customer site is out of IP addresses, they want to go from /24 to /12 netmask... Bad idea?

    - by ewwhite
    One of my client sites called to ask me to change the subnet masks of the Linux servers I manage there while they re-IP/change the netmask of their network based on a 10.0.0.x scheme.

    "Can you change the server netmasks from 255.255.255.0 to 255.240.0.0?"
    You mean, 255.255.240.0?
    "No, 255.240.0.0."
    Are you sure you need that many IP addresses?
    "Yeah, we never want to run out of IP addresses."

    A quick check against the Subnet Cheat Sheet shows: a 255.255.255.0 netmask (a /24) provides 256 addresses (254 usable hosts); it's easy to see how an organization could exhaust that. A 255.240.0.0 netmask (a /12) provides 1,048,576 addresses. This is a small site with fewer than 200 users; I doubt that they'd allocate more than 400 IP addresses. I suggested something that provides fewer hosts, like a /22 or /21 (1,024 and 2,048 addresses, respectively), but was unable to give a specific reason against using the /12 subnet. Is there anything this customer should be concerned about? Are there any specific reasons they shouldn't use such an incredibly large mask in their environment?
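
    The sizes above are just powers of two; a quick shell check of the candidate prefixes:

        # 2^(32 - prefix) addresses per prefix length
        for p in 24 22 21 12; do echo "/$p -> $((2 ** (32 - p))) addresses"; done
        # /24 -> 256, /22 -> 1024, /21 -> 2048, /12 -> 1048576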

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

    A Programmer's Tale

    A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers.

    Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but also indicative of exactly the type of "developer" that has helped to give our industry a bad name. Here is why:

    One Who Knows Nothing, Understands Nothing

    On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities. These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or of using a mature ORM like NHibernate, Subsonic or even Entity Framework.

    On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own, until at present he's in charge of all Initrode's software development. Joe writes code the same way he always has, without bothering to learn much, if anything. He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him. He doesn't understand TDD and considers "testing" to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works. He doesn't really understand SOLID, and he doesn't care to. He's worked as a programmer for years, and that's all that counts. Right? WRONG.

    Who would you rather trust? Someone with years of experience who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability? Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but people like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode; THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way".

    Here's another way of thinking about it: if you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA, or Barack Obama? One is a small-time nobody while the other is very well known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field. Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

    DIY Considered Harmful

    I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond. If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy. However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else, because you are aware of alternative and, in almost all cases, better tools for the job. A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple. Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve.

    In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a "junior" could know more than them; after all, they've spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software. And here comes a newbie who hasn't spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as "proof" that he's good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being "good enough", and mediocrity is the rule of the day.

    A Thorn Bush is No Place for a Phoenix

    My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company. Find a company where they DO care about quality and craftsmanship, otherwise you will never be happy. There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship. In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman, because he won't listen; he might even get upset that someone is trying to "upstage" him, fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question.

    Find a company that has people smart enough to listen to the "best and brightest", and be happy. Do not, I repeat, DO NOT waste away in a job working for ignorant people. At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional. When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity. Work for a company that uses REAL software engineering techniques and really cares about craftsmanship. The biggest issue affecting our careers, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are. These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them.

    To sum this up another way: if you surround yourself with learned people, you will learn. Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.

    There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one. An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line. This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

    Conclusion - I Cast "Protection from Fire"

    I am fairly certain this post will spark some controversy and might even invite the flames. Please keep in mind these are opinions and nothing more. A little healthy rant and subsequent flamewar can be good for the soul once in a while. To paraphrase The Godfather: it helps to get rid of the bad blood.

  • Can't connect to MySQL server on '127.0.0.1' + Postfix

    - by Andrew Dakin
    I just installed Postfix and configured it to use MySQL. It wasn't sending any emails out after I did that, so I checked /var/log/mail.log and it came back with this:

        postfix/trivial-rewrite[5283]: fatal: proxy:mysql:/etc/postfix/mysql-domains.cf(0,lock|fold_fix): table lookup problem
        postfix/cleanup[5258]: warning: AFCDC30437: virtual_alias_maps map lookup problem for [email protected]
        postfix/master[4761]: warning: process /usr/lib/postfix/trivial-rewrite pid 5282 exit status 1
        postfix/proxymap[4126]: warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110)

    In mysql-domains.cf I'm using:

        Hosts 127.0.0.1

    I can connect to MySQL with this:

        mysql -u postfixuser -p

    But I can't connect this way:

        mysql -u postfixuser -h 127.0.0.1 -p maildbname

    Also when I run netstat -l it comes back with:

        tcp        0      0 localhost:mysql         *:*          LISTEN

    I've tried changing my hosts to:

        Hosts localhost

    But then I just get a socket error:

        postfix/cleanup[4870]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    I also have this set up in the MySQL config file:

        bind-address = 127.0.0.1

    I'm sure I'm missing something obvious, but I am pretty new to all this. Thanks! Andy
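
    One hedged thing to try, given that the mysql CLI connects fine without -h: Postfix's mysql_table(5) also accepts a UNIX-socket form for hosts, which sidesteps TCP entirely (note that if the querying daemon runs chrooted, the socket path must be reachable inside the chroot):

        # /etc/postfix/mysql-domains.cf -- socket path taken from the error above
        hosts = unix:/var/run/mysqld/mysqld.sock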

  • Tomcat6 host-manager

    - by Tom
    Hello gurus, I have a problem with setting up new hosts in the Tomcat6 host-manager application. The host-manager application itself works very well. But when I restart the server, all settings are lost and I have to set everything up again:

    1. I start Tomcat6 host-manager at http://localhost:8080/host-manager/html
    2. Set up hosts
    3. Everything works
    4. Restart the server: /sbin/service tomcat6 restart
    5. The settings are lost. There are no hosts; I have to set everything up again (goto 1).

    Note: I use CentOS5 and Tomcat6. Thanks a lot, Tom
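
    For what it's worth, the host-manager UI only creates hosts in the running Tomcat's memory; nothing is written back to the configuration, so they vanish on restart. A hedged sketch of persisting a host by hand in conf/server.xml, inside the <Engine> element (the name and appBase are placeholders):

        <Host name="site1.example.com" appBase="site1apps"
              unpackWARs="true" autoDeploy="true"/>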

  • How do I access site.project.rails (running on host) from VMWare fusion?

    - by Johnny Mnemonic
    I have a Rails app set up and running on my Snow Leopard MacBook; the app is being served by Passenger. As part of the setup the instructions had me add an entry for 127.0.0.1 site.project.rails to my hosts file so I could reach the site at site.project.rails. I can't for the life of me figure out how to get the app to show up in VMware. I have XP set up, and when I browse to http://site.project.rails it doesn't show up. I also set up a basic Rails app, served at localhost:3000 by WEBrick, and I can get that to load by visiting my host's IP (http://192.168.1.1:3000/). I added the same hosts entries I added on my Mac to Windows, and I bridged the network under the VM's settings. What am I missing?
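
    A hedged sketch of the likely fix: inside the XP guest, 127.0.0.1 refers to the guest itself, so the name has to map to the Mac's LAN address instead (192.168.1.1 per the question, assuming bridged networking):

        # C:\WINDOWS\system32\drivers\etc\hosts (in the XP guest)
        192.168.1.1    site.project.rails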

  • Does anyone know of a Nagios plugin that uses nmap and does port checking?

    - by Eedoh
    Hi all. I need to monitor open and closed ports on dozens of hosts. I've found a Nagios plugin that does what I need, but I would have to run the script through NRPE. Some of the hosts run Linux and all of those have Perl installed, but some of them are Windows machines, and it's not convenient for me to install Perl on every one of them, so I cannot use this plugin. I'm hoping there's a Nagios plugin that uses nmap, or something similar, that can check ports on every host remotely, with nothing installed on the remote hosts - only on the server.
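
    Not nmap, but the stock check_tcp plugin achieves the same agentless port check from the Nagios server; a hedged sketch (the generic-service template and host name are placeholders):

        define command {
            command_name  check_remote_port
            command_line  $USER1$/check_tcp -H $HOSTADDRESS$ -p $ARG1$
        }

        define service {
            use                  generic-service
            host_name            winbox01
            service_description  RDP port
            check_command        check_remote_port!3389
        }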

  • Launch elasticsearch dockerfile using my own elasticsearch.yml

    - by Kevin
    I am launching elasticsearch via a Dockerfile found here: https://index.docker.io/u/ehazlett/elasticsearch/ It works great. I need to define my own hosts, as my environment does not support multicast of any kind. I understand that my options are: 1) supply the hosts as a command-line parameter when elasticsearch is run, or 2) modify my elasticsearch.yml file to set the hosts. I know how to build the yml; what I need to know is how to launch elasticsearch via docker using my own yml instead of the one in the container. Is that possible? Thanks.
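
    A hedged sketch of the usual approach: bind-mount a local config file over the one baked into the image (the in-container path is an assumption - check that image's Dockerfile for where it expects elasticsearch.yml):

        docker run -d -p 9200:9200 \
          -v /srv/es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
          ehazlett/elasticsearch

        # In that yml, unicast discovery replaces multicast, e.g.:
        #   discovery.zen.ping.multicast.enabled: false
        #   discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]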

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my nagios 3.2.3 install (CentOS5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. The problem is that it's bogus: manual runs of ping (or of its own check_ping binary) find no fault with any of the affected hosts. The only possible cures I've found so far are to run all the checks manually (they will succeed, but it may act up again on the next check), or to acknowledge and wait for the problem to go away (which may take several hours). I suspect (with no particular evidence other than single rescheduled checks succeeding) that the problem may lie with all the checks being mass-scheduled together - in which case introducing some jitter into the scheduling (how?) might help. Or it may be something completely different. Ideas, anyone?
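
    A hedged sketch of the nagios.cfg knobs that spread checks out rather than letting them bunch up (the concurrency value is illustrative):

        service_inter_check_delay_method=s   # "smart" inter-check delay calculation
        service_interleave_factor=s          # interleave service checks across hosts
        max_concurrent_checks=50             # cap parallel active checks (0 = unlimited)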

  • Shared FC LVM VG with LVs for each KVM VM. Clvm required?

    - by Cocoabean
    I have 2 virtual machine hosts running Ubuntu 12.04 and KVM, managed with libvirt. They are both connected to the same VG, which lives on a LUN presented from my SAN over FC. I provision an LV on this shared VG for each VM. I don't think I need HA or failover, but I do want live migration between the hosts. Do I need clvm in this case? As long as I don't try to start the same VM on both hosts, should this work? Clvm brings a lot of overhead in the form of clustering tools that I don't think I need, and I can deal with manually restarting VMs on the other host in the event of a hardware failure.

  • System Monitoring Redundancy

    - by Josh Brower
    I consult in a small business environment where I have two Hyper-V hosts (with <10 VMs) plus a couple of other servers. I recently had an issue where one of the Hyper-V hosts had a CPU problem and came down, bringing most of my non-critical VMs with it, plus a free piece of software that I use for network and system monitoring and availability. Because of this, and the fact that the iDRAC locked up too, I did not get any alerts about the crash. So I am wondering how I can (cheaply) get a redundant availability monitoring system in place. Is it as simple as running Nagios or Zenoss (or whatever) on two different Hyper-V hosts? It just seems like running more than one copy of Nagios/Zenoss/etc. could be expensive and have high overhead. Thoughts? Thanks! -Josh

  • nagios contact groups to check_mk

    - by Skiaddict
    I have Nagios installed with traditional configuration files. I have created some contact groups and assigned them to hosts. For the web UI I'm using check_mk. And here's the question: check_mk supports showing hosts/services based on contact group membership, but I can't use the Nagios contact groups in check_mk. (The result should be that if person XYZ is logged in, he sees only the hosts and services assigned to him.) My users are in LDAP (I'm using the check_mk login form, not Apache authorisation). I can't find any information about this in the documentation, so if someone has experience, please tell me how this works. The problem is that I cannot let everybody be an admin and receive all alerts...

  • How to configure Windows Server 2008 DHCP to supply unique subnet to a remote site?

    - by caleban
    The Main site hosts the only Windows Server. Windows Server 2008 R2 Domain Controller running AD, DNS, DHCP, Exchange 2007. Remote site has no Windows server. Main site subnet is 192.168.1.0/24 Remote site subnet is 192.168.2.0/24 The Windows Server at Main site is supplying 192.168.1.0/24 via DHCP to hosts at the local site where it resides. Is it possible to configure that Windows Server to supply 192.168.2.0/24 to hosts at the Remote site and if so how? We could use the Cisco router at the Remote site to supply DHCP but if possible we'd like to use the Windows Server at the Main site to supply DHCP.
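
    A hedged sketch of the standard mechanism: create a second scope for 192.168.2.0/24 on the Windows DHCP server, then have the Remote-site Cisco router relay DHCP broadcasts to it; the relay's giaddr tells the server which scope to answer from (interface and server address are placeholders):

        ! On the Remote-site router, on the interface facing 192.168.2.0/24
        interface FastEthernet0/0
         ip helper-address 192.168.1.10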

  • Browser not parsing PAC file properly?

    - by mfinni
    I have a long PAC file. The browsers (IE and Chrome) are configured to use it, and it generally does what it says on the tin. However, I have a domain that continues to go through the proxy although it should be going direct.

        // Match specific hosts and IPs entered as hosts
        if (buncha stuff ||
            shExpMatch(host,"(*.newmarketinc.com)") ||
            shExpMatch(host,"(newmarketinc.com)") ||
            buncha stuff )
          return "DIRECT";

    Pactester shows that anything in the domain should be direct:

        h:\pacparser\pactester.exe -p h:\pacfile -u http://daas.newmarketinc.com
        DIRECT

    But we continue to pass traffic to hosts in this domain via the proxy; Wireshark and Fiddler both show this. Traffic to other sites in this stanza does properly go direct, as confirmed by Fiddler and Wireshark. How do I figure out how my browser has gotten brain damage?

  • How can browsers in VMs resolve hostnames of websites on parent PC?

    - by elliot100
    I have a number of local websites in development on my Windows PC, set up as virtual hosts within Apache, with hostnames (along the lines of dev.example.com) resolved via the hosts file, so I can test them out with various browsers. I now want to extend browser testing to running browsers in various OSs in virtual machines, and want to be able to resolve dev.example.com from the VMs. Currently these are a mix of VMware Server and Virtual PC. I know I can edit the hosts file on any Windows VMs, but this is a bit fiddly and I'd like a solution that is independent of the individual VMs. I think what I need is a nameserver, but what's the simplest way of going about this? I'd like everything to be self-contained on the one machine. I think I can cover firewall and Apache permissioning issues.
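
    A hedged sketch of the nameserver approach using dnsmasq (it would have to run somewhere all the VMs can reach, e.g. a small Linux VM; the forwarder address is a placeholder). Note the dev entries must point at the Windows PC's LAN IP, not 127.0.0.1, since loopback means something different inside each VM:

        # /etc/dnsmasq.conf
        addn-hosts=/etc/hosts.dev     # dev.example.com -> the Windows PC's LAN IP
        server=192.168.1.1            # forward everything else to the normal resolver

    Then point each VM's DNS at the dnsmasq instance.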

  • Switch duplicates packets and forwards them along two routes

    - by sami
    There is a network including a router, two hosts, and a switch which connects the hosts to the router. I have a virtual machine on my system, with the network adapter set to act as a bridge, so the virtual machine and the real OS are my two hosts on different LANs. They share one network card and are connected to the switch. When either host sends a packet to the other one, the switch duplicates the packet and forwards it to both the router and the other host. How can I solve the duplicate packet problem? Thanks.

  • ESXi5 - management services crash - VMs running

    - by Frederik Nielsen
    I have a setup with two ESXi5 servers. We are (were) running with an iSCSI box to serve disks for the VMs; however, we are in the process of migrating away from it because the storage OS disk is bad. Now, one of the ESXi hosts has been running for ~20hrs, and it seems like the management services just crashed on that host. The VMs are still running, so it's not really serious, but I want to fix it. Should I be worried? Will the VMs keep running? The host does respond to pings. I am running a vCenter to administrate the hosts. Thanks in advance.
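
    A hedged sketch of restarting the management agents from an SSH or DCUI shell on the affected host; in the normal case the running VMs are untouched by this:

        /etc/init.d/hostd restart    # host agent (what the vSphere client/vCenter talks to)
        /etc/init.d/vpxa restart     # vCenter agent
        # or restart the whole set of management agents:
        services.sh restart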

  • Parallels: How to see a Mac-hosted website from Windows?

    - by Jim Miller
    I'm traveling at the moment, and have moved one of the websites I'm working on to my MBP so I can work on it without a network connection. I've made an addition to the Mac's /etc/hosts file pointing the domain name to 127.0.0.1, and all's well. I now want to get into Parallels and check the site from Windows browsers. How do I set things up so that the Windows browser will understand the domain name and access the site? The Windows image obviously doesn't recognize / can't find the Mac's /etc/hosts file, and references to 127.0.0.1 in the Windows hosts file just as obviously point to Windows, not the Mac. Any advice out there? Thanks!

  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They are sitting behind an HSPA modem, with each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any webpage I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine; the others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports aside from their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set them up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.

  • How do I map a friendly name (e.g. www.example.com) to 127.0.0.1:port# on Mac OS X

    - by Fred Finkle
    I am trying to create a demo for a class of mine and I want to configure "fake" domain names on my laptop. A previous question, "Can I specify a port in an entry in my /etc/hosts on OS X?", contained an answer indicating that to do it you must use /etc/hosts plus changes to iptables: "If OS X uses iptables you could point xyz.com to some ip in the hosts file like 157.166.226.25 and then:

        sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -d 157.166.226.25 -j DNAT --to-destination 127.0.0.1:3000

    Since OS X doesn't use iptables, how do I do the equivalent using the tools available on OS X? (The original asker seemed to know how to do this, so it wasn't explained.) Thanks in advance.
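
    A hedged sketch of the OS X equivalent: on 10.7 and later, pf fills the iptables role (ipfw did on earlier releases). Map the fake name to loopback in /etc/hosts (xyz.com -> 127.0.0.1) and redirect port 80 on loopback to the app's port:

        # rules file, loaded with: sudo pfctl -ef /path/to/rules
        rdr pass on lo0 inet proto tcp from any to 127.0.0.1 port 80 -> 127.0.0.1 port 3000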

  • Cannot authenticate to SBS 2003

    - by Lerp
    I am trying to connect my machine to my work's entirely Windows network and I am having a few issues: whenever I try to access the server, the authentication dialog just keeps popping back up, and I cannot connect to the printers (it says connecting to device failed). I have tried setting up samba, winbind, kerberos, likewise open, all to no avail. I have a feeling I am just setting them up wrong. My Nautilus shows the server when I go to Network > Windows Network > MASTERMAGNETS. I can ping both MASTERMAGNETS.LOCAL and 192.168.0.2 after modifying my /etc/hosts:

        james@jamesmaddison:~$ cat /etc/hosts
        127.0.0.1       localhost jamesmaddison
        192.168.0.2     MASTERMAGNETS.LOCAL
        192.168.0.50    Sharp-Printer

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    I believe that's the correct domain (not sure if that's the correct term), as when I do nslookup MASTERMAGNETS.LOCAL I get the following:

        james@jamesmaddison:~$ nslookup MASTERMAGNETS.LOCAL
        Server:         192.168.0.2
        Address:        192.168.0.2#53

        Name:   MASTERMAGNETS.LOCAL
        Address: 192.168.0.3
        Name:   MASTERMAGNETS.LOCAL
        Address: 192.168.0.2

    It all worked fine before I reinstalled Ubuntu, and now I just cannot get access to the server. All help is appreciated; I need to get this working or I fear I will be forced to develop in a Windows environment :(
