Search Results

Search found 4073 results on 163 pages for 'hosts deny'.


  • Multiple SSL Certificates Running on Mac OS X 10.6

    - by frodosghost.mp
    I have been running into walls with this for a while, so I posted at Stack Overflow and was pointed over here... I am attempting to set up multiple IP addresses on Snow Leopard so that I can develop with SSL certificates. I am running XAMPP - I don't know if that is the problem, but I guess I would run into the same problems, considering the built-in Apache is turned off. So first up I looked into bringing the IPs up at boot. I got up and running with a new StartupItem that runs correctly, because I can ping both addresses:

        ping 127.0.0.2
        ping 127.0.0.1

    So now I have the IP addresses, which as you may know are not standard on OS X. I edited the /etc/hosts file to include the new sites too:

        127.0.0.1 site1.local
        127.0.0.2 site2.local

    I had already changed httpd.conf to use httpd-vhosts.conf, because I had a few sites running on the one IP address. I have edited the vhosts file so a site looks like this:

        <VirtualHost 127.0.0.1:80>
            DocumentRoot "/Users/jim/Documents/Projects/site1/web"
            ServerName site1.local
            <Directory "/Users/jim/Documents/Projects/site1">
                Order deny,allow
                Deny from All
                Allow from 127.0.0.1
                AllowOverride All
            </Directory>
        </VirtualHost>

        <VirtualHost 127.0.0.1:443>
            DocumentRoot "/Users/jim/Documents/Projects/site1/web"
            ServerName site1.local
            SSLEngine On
            SSLCertificateFile "/Applications/XAMPP/etc/ssl-certs/myssl.crt"
            SSLCertificateKeyFile "/Applications/XAMPP/etc/ssl-certs/myssl.key"
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/Users/jim/Documents/Projects/site1">
                Order deny,allow
                Deny from All
                Allow from 127.0.0.1
                AllowOverride All
            </Directory>
        </VirtualHost>

    In the above code, you can change the 1's to 2's and it is the setup for the second site. The two sites use the same certificate, which is why they are on different IP addresses. I also included the NameVirtualHost information at the top of the file:

        NameVirtualHost 127.0.0.1:80
        NameVirtualHost 127.0.0.2:80
        NameVirtualHost 127.0.0.1:443
        NameVirtualHost 127.0.0.2:443

    I can ping site1.local and site2.local, and I can use telnet (telnet site2.local 80) to get into both sites. But in Safari I can only reach site1.local - navigating to site2.local gives me either the localhost main page (which is included in the vhosts) or an Access forbidden! error. I am unsure what to do; any suggestions would be awesome.
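    An editor's hedged aside, not part of the original question: one thing worth ruling out is the Allow from address in the second site's <Directory> block. Connections to 127.0.0.2 may arrive with a loopback source address other than the one listed, which would produce exactly the Access forbidden! page described. A minimal sketch that admits any loopback source (the site2 path is carried over from the question):

        <Directory "/Users/jim/Documents/Projects/site2">
            Order deny,allow
            Deny from All
            # accept any 127.x.x.x source, since the kernel may pick either alias
            Allow from 127.0.0.0/255.0.0.0
            AllowOverride All
        </Directory>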

    Read the article

  • How do I access site.project.rails (running on host) from VMWare fusion?

    - by Johnny Mnemonic
    I have a Rails app set up and running on my Snow Leopard MacBook; the app is being served by Passenger. As part of the setup I added an entry for 127.0.0.1 site.project.rails to my hosts file so I could reach the site at site.project.rails. I can't for the life of me figure out how to get the app to show up in VMware. I have XP set up, and browsing to http://site.project.rails doesn't work. As a test I set up a basic Rails app served at localhost:3000 by WEBrick, and I can get that to load by visiting my host's IP (http://192.168.1.1:3000/). I added the same hosts entries I added on my Mac to Windows, and I also bridged the network under the settings for the VM. What am I missing?
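    A hedged pointer, not from the original post: inside the VM, 127.0.0.1 is the VM itself, so copying the Mac's hosts entry verbatim points the name at Windows. Since http://192.168.1.1:3000/ already works, the entry probably needs to target the Mac's LAN address instead; the address below is just the one quoted in the question:

        # C:\WINDOWS\system32\drivers\etc\hosts on the XP guest
        192.168.1.1    site.project.rails

    Assuming Passenger sits behind Apache on port 80, no port suffix should be needed once the name resolves to the Mac.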

    Read the article

  • Does anyone know of a Nagios plugin that uses nmap and does port checking?

    - by Eedoh
    Hi to all. I need to monitor open and closed ports on dozens of hosts. I've found a Nagios plugin that does what I need, but I would have to use this script through NRPE. Some of the hosts run Linux and all of those have Perl installed, but some of them are Windows machines, and it's not convenient for me to install Perl on every one of them - that's why I can't use this plugin. I'm hoping there's a Nagios plugin that uses nmap, or something similar, so it can check ports on every host remotely, without installing anything on the remote hosts, only on the server.
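    A hedged aside the poster may find useful: the stock Nagios plugins already ship check_tcp, which probes a port from the Nagios server itself with nothing installed on the monitored machine. A minimal sketch, assuming the usual $USER1$ plugin path; the command and service names are made up for illustration:

        # commands.cfg - probe an arbitrary TCP port from the Nagios server
        define command {
            command_name check_remote_port
            command_line $USER1$/check_tcp -H $HOSTADDRESS$ -p $ARG1$
        }

        # services.cfg - e.g. expect RDP to be open on a Windows host
        define service {
            use                  generic-service
            host_name            winbox01
            service_description  RDP port
            check_command        check_remote_port!3389
        }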

    Read the article

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my Nagios 3.2.3 install (CentOS 5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. The problem is that it's bogus: manual runs of ping (or of Nagios's own check_ping binary) find no fault with any of the affected hosts. The only possible cures I've found so far are to run all the checks manually (they will succeed, but it may act up again on the next check) or to acknowledge and wait for the problem to go away (which may take several hours). I suspect (for no particular reason other than single rescheduled checks succeeding) that the problem may lie in all the checks being mass-scheduled together - in which case introducing some jitter into the scheduling (how?) might help. Or it may be something completely different. Ideas, anyone?
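    A hedged suggestion that goes beyond the post: Nagios 3 can inject that jitter itself through its scheduling options in nagios.cfg; the values below are illustrative, not tuned:

        # nagios.cfg - let Nagios spread and interleave checks ("s" = smart)
        service_inter_check_delay_method=s
        service_interleave_factor=s
        host_inter_check_delay_method=s
        # and cap how many checks may run simultaneously
        max_concurrent_checks=100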

    Read the article

  • Launch elasticsearch dockerfile using my own elasticsearch.yml

    - by Kevin
    I am launching elasticsearch via a Dockerfile found here: https://index.docker.io/u/ehazlett/elasticsearch/ It works great, but I need to define my own hosts, as my environment does not support multicast of any kind. I understand that my options are: 1) supply the hosts as a command-line parameter when elasticsearch is run, or 2) modify my elasticsearch.yml file to set the hosts. I know how to build the yml; what I need to know is how to launch elasticsearch via Docker using my own yml instead of the one in the container. Is that possible? Thanks.
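    A hedged sketch of the usual Docker answer: bind-mount your own file over the image's copy with -v. The in-container config path below is an assumption - check the image's Dockerfile for the real location:

        # mount a local elasticsearch.yml over the one baked into the image
        docker run -d -p 9200:9200 \
            -v /home/kevin/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
            ehazlett/elasticsearch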

    Read the article

  • How can I password prompt certain IPs and allow all others free access using Apache?

    - by Moak
    SOLVED: The idea is that if the visitor comes from China they have to pass basic authentication, while any other IP address (including proxies) can visit the site without being hassled.

        # ...about 1400 rules like these in total
        SetEnvIf Remote_Addr 222.249.128.0/19 china
        SetEnvIf Remote_Addr 222.249.160.0/20 china
        SetEnvIf Remote_Addr 222.249.176.0/20 china

        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /www/passwd/users
        Require valid-user
        Order allow,deny
        Allow from All
        Deny from env=china
        Satisfy any

    Read the article

  • Shared FC LVM VG with LVs for each KVM VM. Clvm required?

    - by Cocoabean
    I have two virtual machine hosts running Ubuntu 12.04 and KVM managed with libvirt. They are both connected to the same VG, which is a LUN on my SAN over FC. I provision LVs on this shared VG for each VM. I don't think I need HA or failover, but I do want live migration between the hosts. Do I need clvm in this case? As long as I don't try to start the same VM on each host, should this work? clvm requires lots of overhead with clustering tools that I don't think I need, and I can deal with manually restarting VMs on other hosts in the event of a hardware failure.
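    A hedged note, not from the thread: without clvmd, the second host does not see LVM metadata changes (new or resized LVs) until it rescans, so a lockless setup is commonly run on the strict understanding that only one host ever changes metadata at a time. A sketch of picking up a change manually; the VG/LV names are invented:

        # on host B, after host A created an LV on the shared VG
        vgscan
        lvchange -ay sharedvg/vm42-disk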

    Read the article

  • .htaccess to restrict access to only select files

    - by Bryan Ward
    I have a directory on my webserver from which I would like to serve up only PDF files. I found I can restrict access using .htaccess, with something like

        <FilesMatch "\.(text,doc)">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

    to serve up everything except a regular expression. Is it possible to instead restrict access to only files which match some regular expression?
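    A hedged inversion of that snippet, not from the post: deny the whole directory, then re-allow just the matching files. Sketched for Apache 2.2-style access control:

        # .htaccess - deny everything...
        Order deny,allow
        Deny from all

        # ...except files matching this expression
        <FilesMatch "\.pdf$">
            Order allow,deny
            Allow from all
        </FilesMatch>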

    Read the article

  • System Monitoring Redundancy

    - by Josh Brower
    I consult in a small-business environment where I have two Hyper-V hosts (with <10 VMs) plus a couple of other servers. I recently had an issue where one of the Hyper-V hosts had a CPU problem and came down, bringing most of my non-critical VMs with it, plus a free piece of software that I use for network and system monitoring and availability. Because of this, and the fact that the iDRAC locked up too, I did not get any alerts about the crash. So I am wondering how I can (cheaply) get a redundant availability-monitoring system in place - is it as simple as running Nagios or Zenoss (or whatever) on two different Hyper-V hosts? It just seems like running more than one copy of Nagios/Zenoss/etc. could be expensive and have high overhead. Thoughts? Thanks! -Josh

    Read the article

  • Enabled Apache mod_status but server-status is not found on SUSE Enterprise 11 SP1

    - by Charles Yeung
    In /etc/apache2/httpd.conf I have removed the Include line for mod_status and added the following at the end of the file:

        LoadModule status_module /usr/lib/apache2/mod_status.so
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            AllowOverride None
            Order Deny,Allow
            Deny from all
            Allow from all
        </Location>

    Then I restart Apache and go to http://HOSTNAME/server-status, but I get a page-not-found error. Does anyone know why? Is there any further step needed to see the Apache status? Thanks
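    Two hedged checks that aren't in the post: confirm the module really loaded, and watch the error log while requesting the page - on SLES the block may also need to live inside whichever virtual host answers for HOSTNAME, or a vhost's own rules will shadow it:

        # is status_module actually loaded?
        /usr/sbin/apache2ctl -M | grep status
        # watch for clues while hitting /server-status
        tail -f /var/log/apache2/error_log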

    Read the article

  • htaccess allow if does not contain string

    - by Tom
    I'm trying to set up a .htaccess file which will allow users to bypass the password block if they come from a domain which does not start with preview, e.g. http://preview.example.com would trigger the password and http://example.com would not. Here's what I've got so far:

        SetEnvIfNoCase Host preview(.*\.)? preview_site
        AuthUserFile /Users/me/.htpasswd
        AuthGroupFile /dev/null
        AuthType Basic
        AuthName "Development Area"
        Require valid-user
        Order deny,allow
        Allow from 127
        deny from env=preview_site
        Satisfy any

    Any ideas?
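    A hedged observation, not from the post: SetEnvIfNoCase matches anywhere in the Host header, so preview(.*\.)? can fire on names that merely contain the word. Anchoring the pattern states the intent exactly:

        # only hostnames that really start with "preview."
        SetEnvIfNoCase Host ^preview\. preview_site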

    Read the article

  • nagios contact groups to check_mk

    - by Skiaddict
    I have Nagios installed with traditional configuration files. I have created some contact groups and assigned them to hosts. For the web UI I'm using check_mk. And here's the question: check_mk supports showing hosts/services based on contact-group membership, but I can't use the Nagios contact groups in check_mk. (The desired result is that if person XYZ is logged in, he sees only the hosts and services assigned to him.) My users are in LDAP (I'm using the check_mk login form, not Apache authorisation). I can't find any information about this in the documentation, so if someone has experience, please tell me how this works. The problem is that I cannot let everybody be an admin and receive all alerts...
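    A hedged sketch, offered with low certainty about the poster's check_mk version: Multisite shows a non-admin user only the hosts and services whose Nagios contacts include a contact with the same name as the login, so the usual recipe is matching login/contact names plus a short admin list in multisite.mk:

        # multisite.mk - everyone not listed sees only what they are a contact for
        admin_users = [ "nagiosadmin" ]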

    Read the article

  • How to configure Windows Server 2008 DHCP to supply unique subnet to a remote site?

    - by caleban
    The main site hosts the only Windows server: a Windows Server 2008 R2 domain controller running AD, DNS, DHCP and Exchange 2007. The remote site has no Windows server. The main-site subnet is 192.168.1.0/24; the remote-site subnet is 192.168.2.0/24. The Windows server at the main site is supplying 192.168.1.0/24 via DHCP to hosts at the local site where it resides. Is it possible to configure that Windows server to also supply 192.168.2.0/24 to hosts at the remote site, and if so, how? We could use the Cisco router at the remote site to supply DHCP, but if possible we'd like the Windows server at the main site to do it.
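    The standard recipe, offered as a hedged aside rather than from the post: add a second scope (192.168.2.0/24) on the Windows DHCP server, then have the remote Cisco router relay DHCP broadcasts to it. The interface and server address below are placeholders:

        ! Cisco router, remote-site LAN interface
        interface FastEthernet0/0
         ip address 192.168.2.1 255.255.255.0
         ! forward DHCP broadcasts to the main-site Windows server
         ip helper-address 192.168.1.10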

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    I'm getting this in the error log:

        *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    My server block:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6
            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;

            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }

            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }

            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }

            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # I need to modify this to show "missing" only for missing files;
                # right now it shows it for all of them.
            }

            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }

            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is default and untouched. The problem is a little strange to me: error pages show "File not found" only when they are .php files; other missing URLs correctly fall through to 404.php. site.com/test calls 404.php, but site.com/test.php gives "File not found". I keep searching and making changes, but nothing has solved the problem yet.
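    A hedged guess at the cause, not from the post: for a missing .php URI, nginx still hands the request to PHP-FPM, which replies "Primary script unknown" (rendered as "File not found"), so error_page never fires. Having nginx test for the file first keeps missing scripts inside its own 404 handling:

        location ~* \.php$ {
            # serve nginx's 404 (and thus /404.php) when the script is absent
            try_files $uri =404;
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }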

    Read the article

  • Browser not parsing PAC file properly?

    - by mfinni
    I have a long PAC file. The browsers (IE and Chrome) are configured to use it, and it generally does what it says on the tin, but I have one domain that continues to go through the proxy although it should be going direct:

        // Match specific hosts and IPs entered as hosts
        if (buncha stuff ||
            shExpMatch(host,"(*.newmarketinc.com)") ||
            shExpMatch(host,"(newmarketinc.com)") ||
            buncha stuff )
            return "DIRECT";

    pactester shows that anything in the domain should go direct:

        h:\pacparser\pactester.exe -p h:\pacfile -u http://daas.newmarketinc.com
        DIRECT

    But we continue to pass traffic to hosts in this domain via the proxy; Wireshark and Fiddler both show this. How do I figure out how my browser has gotten brain damage? Traffic to other sites in this stanza does properly go direct, as confirmed by Fiddler and Wireshark.
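    One hedged suspect missing from the post: WinINET (used by IE, and by Chrome on Windows for proxy settings) keeps a per-host automatic proxy result cache, so a host that once evaluated to PROXY can stay that way until the browser restarts. Disabling the cache is a documented registry switch:

        REM disable the automatic proxy result cache for the current user
        reg add "HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings" ^
            /v EnableAutoProxyResultCache /t REG_DWORD /d 0 /f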

    Read the article

  • Issues with configuration of Apache and mod_auth_sspi

    - by TekiusFanatikus
    I've been able to get this working using XAMPP with Apache 2.0.55 and XAMPP Apache 2.2.14 without any problems. However, when I attempt to configure our intranet server (Apache 2.0.59), I don't get the same results: $_SERVER["REMOTE_USER"] and $_SERVER["PHP_AUTH_USER"] should contain "domain/user_name", but in this case they are blank. The relevant conf section (the stock httpd.conf comment blocks around Options and AllowOverride are trimmed here for readability):

        <Directory "/xxx/xampp/htdocs/">
            #Options Indexes FollowSymLinks Includes ExecCGI
            Options Indexes FollowSymLinks
            #AllowOverride All
            AllowOverride None
            #Order allow,deny
            #Allow from all
            Order allow,deny
            Allow from all

            # NT domain login
            AuthName "Intranet"
            AuthType SSPI
            SSPIAuth On
            SSPIAuthoritative On
            SSPIDomain "xxxx"
            SSPIOfferBasic Off
            SSPIPerRequestAuth On
            SSPIOmitDomain Off    # keep domain name in userid string
            SSPIUsernameCase lower
            Require valid-user
        </Directory>

    I would like to note that I've modified the paths to reflect the intranet environment. I'm using the following module: http://sourceforge.net/projects/mod-auth-sspi/ Once the module is installed and the conf file is modified, the intranet environment's server scope isn't populated with the expected variables.

    Edit #1: the corresponding block on the intranet server is identical apart from the placeholders (<Directory "/path_here">, SSPIDomain "domain_here") and one directive:

        SSPIOfferBasic On

    Read the article

  • Switch duplicates packets and forwards them along two routes

    - by sami
    There is a network consisting of a router, two hosts, and a switch which connects the hosts to the router. I have a virtual machine on my system, with its network adapter set to act as a bridge, so the virtual machine and the real OS are my two hosts; they share one network card and are connected to the switch. When either host sends a packet to the other one, the switch duplicates the packet and forwards it to both the router and the other host. How can I solve the duplicate-packet problem? Thanks.

    Read the article

  • How can browsers in VMs resolve hostnames of websites on parent PC?

    - by elliot100
    I have a number of local websites in development on my Windows PC, set up as virtual hosts within Apache, with hostnames (along the lines of dev.example.com) resolved via the hosts file, so I can test them out with various browsers. I now want to extend browser testing to browsers running in various OSs in virtual machines, and want to be able to resolve dev.example.com from the VMs. Currently these are a mix of VMware Server and Virtual PC. I know I can edit the hosts file on any Windows VMs, but this is a bit fiddly and I'd like a solution which is independent of the individual VMs. I think what I need is a nameserver, but what's the simplest way of going about this? I'd like everything to be self-contained on the one machine. I think I can cover firewall and Apache permissioning issues.
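    A hedged route the post doesn't mention: a single dnsmasq instance can wildcard a whole development domain to the host PC's address. dnsmasq isn't native to Windows, so it would have to live on an always-on Linux VM or a router; the domain and IP below are placeholders:

        # dnsmasq.conf - answer dev.example.com and all subdomains with the host's IP
        address=/dev.example.com/192.168.1.10
        # forward everything else to a real resolver
        server=8.8.8.8

    Each test VM then just uses that dnsmasq box as its DNS server.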

    Read the article

  • ESXi5 - management services crash - VMs keep running

    - by Frederik Nielsen
    I have a setup with two ESXi5 servers. We are (were) running with an iSCSI box serving disk for the VMs; however, we are in the process of migrating away from it, because the storage OS disk is bad. Now one of the ESXi hosts has been running for ~20 hrs, and it seems like the management services just crashed on that host. The VMs are still running, so it's not really serious, but I want to fix it. Should I be worried? Will the VMs keep running? The host does respond to pings. I am running a vCenter to administrate the hosts. Thanks in advance.
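    Hedged, from general ESXi practice rather than this thread: the management agents can usually be restarted from the ESXi shell (or the DCUI's "Restart Management Agents" option) without disturbing running VMs:

        # ESXi 5 shell - restart the management agents; VMs keep running
        /etc/init.d/hostd restart
        /etc/init.d/vpxa restart
        # or the blunter variant:
        services.sh restart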

    Read the article

  • Django | Apache | Deploy website behind SSL

    - by planet260
    So here are my requirements. I have a website built in Django, deployed on Apache on Ubuntu. Before, no SSL was involved, so the deployment was pretty simple. But now the requirements have changed: I have to put a few actions like signup and login behind SSL and serve the admin panel and everything else over plain HTTP. Following this tutorial I have set up Apache and SSL and generated the certificates, but I am not sure how to proceed, i.e. how to serve only a few of my actions through SSL. Below is my configuration. The plain-HTTP actions work fine, but I don't know how to configure the SSL calls.

        WSGIScriptAlias / /home/ubuntu/myproject/src/myproject/wsgi.py
        WSGIPythonPath /home/ubuntu/myproject/src

        <VirtualHost *:80>
            ServerName mydomain.com
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
            <Location "/login">
                RewriteEngine on
                RewriteRule /admin(.*)$ https://mydomain.com/login$1 [L,R=301]
            </Location>
        </VirtualHost>

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
        </VirtualHost>

    Can you please help me out with how to achieve this? What am I doing wrong? I have read a lot of tutorials, but honestly I am not really good at configurations. Any help is appreciated.
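    A hedged sketch of one way to split the traffic - not a drop-in fix, and it assumes mod_rewrite is enabled; the path names come from the question's description rather than the app's actual URL layout:

        # inside <VirtualHost *:80> - push only the sensitive actions onto SSL
        RewriteEngine On
        RewriteRule ^/(login|signup)(.*)$ https://mydomain.com/$1$2 [L,R=301]

    Django also needs to emit https:// links for the secured views, or visitors will bounce straight back to HTTP.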

    Read the article

  • Parallels: How to see a Mac-hosted website from Windows?

    - by Jim Miller
    I'm traveling at the moment and have moved one of the websites I'm working on to my MBP so I can work on it without a network connection. I've made an addition to the Mac's /etc/hosts file pointing the domain name to 127.0.0.1, and all's well. I now want to get into Parallels and check the site from Windows browsers. How do I arrange things so that the Windows browser will understand the domain name and access the site? The Windows image obviously doesn't recognize / can't find the Mac's /etc/hosts file, and references to 127.0.0.1 in the Windows hosts file just as obviously point to Windows, not the Mac. Any advice out there? Thanks!
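    A hedged pointer, not from the post: the Windows hosts file has to aim the name at an address that reaches the Mac. Under Parallels shared networking the Mac is commonly reachable at the virtual gateway - often 10.211.55.2, though that's an assumption; ipconfig in the VM shows the real gateway. The domain below is a placeholder:

        # C:\WINDOWS\system32\drivers\etc\hosts in the Parallels VM
        10.211.55.2    dev.example.com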

    Read the article

  • How do I map a friendly name (e.g. www.example.com) to 127.0.0.1:port# on Mac OS X

    - by Fred Finkle
    I am trying to create a demo for a class of mine and I want to configure "fake" domain names on my laptop. A previous question, "Can I specify a port in an entry in my /etc/hosts on OS X?", contained an answer indicating that to do it you must combine /etc/hosts with firewall rules:

        "If OS X uses iptables you could point xyz.com to some IP in the hosts file,
        like 157.166.226.25, and then:
        sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -d 157.166.226.25 \
            -j DNAT --to-destination 127.0.0.1:3000"

    Since OS X doesn't use iptables, how do I do the equivalent using the tools available on OS X? (The original asker seemed to know how to do this, so it wasn't explained.) Thanks in advance.
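    A hedged translation to the tooling OS X of that era actually ships (ipfw; later releases moved to pf). The address is the one quoted above and the port is the example's 3000 - both placeholders, as is the domain:

        # /etc/hosts - park the fake name on an otherwise-unused address
        157.166.226.25    www.example.com

        # forward local web traffic for that address to the app on port 3000
        sudo ipfw add 100 fwd 127.0.0.1,3000 tcp from me to 157.166.226.25 dst-port 80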

    Read the article

  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They sit behind an HSPA modem, each occupying an HTTP port between 81 and 84; 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any web page I request from any of the hosts comes through - they all fail after about 20% of the page. If I put the host on port 81 into the DMZ, it serves pages fine; the others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set these up - they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
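    A hedged first isolation step that isn't in the post: compare the same fetch from inside the LAN and through the modem. If the LAN copy is complete, the truncation points at the modem/forwarding (MTU/MSS trouble is a classic culprit on 3G modems) rather than at lighttpd. Addresses below are placeholders:

        # from a machine on the LAN - does the whole page arrive?
        curl -v -o lan.html http://192.168.0.10:81/
        # the same page via the modem's public address
        curl -v -o wan.html http://203.0.113.5:81/
        ls -l lan.html wan.html   # compare sizes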

    Read the article

  • Cannot authenticate to SBS 2003

    - by Lerp
    I am trying to connect my machine to my work's entirely-Windows network and I am having a few issues: whenever I try to access the server, the authentication dialog just keeps popping back up, and I cannot connect to the printers (it says connecting to the device failed). I have tried setting up Samba, winbind, Kerberos and Likewise Open, all to no avail; I have a feeling I am just setting them up wrong. My Nautilus shows this when I go to Network > Windows Network > MASTERMAGNETS. I can ping both MASTERMAGNETS.LOCAL and 192.168.0.2 after modifying my /etc/hosts:

        james@jamesmaddison:~$ cat /etc/hosts
        127.0.0.1       localhost jamesmaddison
        192.168.0.2     MASTERMAGNETS.LOCAL
        192.168.0.50    Sharp-Printer

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    I believe that's the correct domain (not sure if that's the correct term), as when I do nslookup MASTERMAGNETS.LOCAL I get the following:

        james@jamesmaddison:~$ nslookup MASTERMAGNETS.LOCAL
        Server:   192.168.0.2
        Address:  192.168.0.2#53

        Name: MASTERMAGNETS.LOCAL
        Address: 192.168.0.3
        Name: MASTERMAGNETS.LOCAL
        Address: 192.168.0.2

    It all worked fine before I reinstalled Ubuntu, and now I just cannot get access to the server. All help is appreciated - I need to get this working or I fear I will be forced to develop in a Windows environment :(
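    A hedged sketch of the classic Samba/winbind join against SBS 2003's AD, assuming Kerberos is configured and the clock is in sync with the DC; the admin account name is a placeholder:

        # /etc/samba/smb.conf - minimal AD member settings
        [global]
            workgroup = MASTERMAGNETS
            realm     = MASTERMAGNETS.LOCAL
            security  = ads

        # then join the domain with an account allowed to add machines
        sudo net ads join -U Administrator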

    Read the article
