Search Results

  • All computers on network get stuck waiting for some sites indefinitely

    - by zacaj
    This happens across three computers, running Windows 7 and Ubuntu, with Firefox, Opera, and Chrome (all latest versions). I am connected to the internet through a Verizon wireless USB modem. When I try to open some web pages they never finish loading (and usually never even show anything). The status bar at the bottom of the browser displays "Waiting for X". The servers it gets stuck on include: platform.twitter.com, s7.addthis.com, connect.facebook.net, ajax.googleapis.com, 2mdn.net. I've been getting away with just blocking them in AdBlock up until now, but the last two have been causing problems. Some sites require googleapis.com to load correctly, and some won't ever load unless it's blocked. eBay requires access to 2mdn.net to load pictures. On top of this, it's getting really annoying having to update AdBlock across all these computers whenever a new site pops up. I'm hoping there's some easier way to fix this? The fact that different sites cause the freeze suggests to me that it's either a problem on my end (somehow?) or some server-side software that got updated with a new bug?
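
    A hedged sketch of the network-wide equivalent of the AdBlock workaround described above, useful only as a stopgap while the real cause is found: a hosts-file entry per machine (or a block on the router, if it supports one) blackholes the slow hosts. The hostnames are the ones listed in the question; this only replicates the block, it does not explain the hangs.

      # /etc/hosts on Ubuntu, C:\Windows\System32\drivers\etc\hosts on Windows 7
      0.0.0.0 platform.twitter.com
      0.0.0.0 s7.addthis.com
      0.0.0.0 connect.facebook.net
      # note: blocking ajax.googleapis.com or 2mdn.net breaks sites that need them, as noted above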

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load, and as a stopgap we want to shift all the static content on the site to a cookieless domain, e.g. http://static.thedomain.com. The application is not well understood. So to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com), I thought about proxying the site through nginx and rewriting the outgoing responses so that links to /images/... are rewritten as http://static.thedomain.com/images/.... So for example, in the response from Apache to nginx there is a blob of headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> and I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> so that the browser, upon receiving the HTML page, then requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound URLs.
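
    A hedged sketch of one possible approach with nginx's ngx_http_sub_module (sub_filter), which rewrites text in the proxied response body. The static hostname is taken from the question; the upstream name and everything else is illustrative, and it assumes the sub module is compiled in:

      location / {
          proxy_pass http://apache_backend;
          proxy_set_header Host $host;

          # Rewrite root-relative image links in the outgoing HTML
          sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
          sub_filter_once off;                   # replace every occurrence, not just the first
          proxy_set_header Accept-Encoding "";   # keep the upstream from gzipping, so sub_filter can see the HTML
      }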

  • Can't configure DNS properly on CentOS

    - by Nuker
    I am on a VPS that I must manage myself. I have network problems: in the last few days many of my users report they can't reach my site from my domain, and it seems Google and Facebook can't either (this never happened before). However, I can reach my site without problems, and so can many other people. So I tested with a PHP include like this: <?php include 'http://mysite.com/somepage.php'; ?> and I get this error: Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in. I even tried including content from yahoo.com or facebook.com and it didn't work either. However, the includes do work if I use IPs instead of domains. Do I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64 on x86_64, CentOS Linux 6.5. I have this in my resolv.conf:

      # Generated by NetworkManager
      # No nameservers found; try putting DNS servers into your
      # ifcfg files in /etc/sysconfig/network-scripts like so:
      #
      # DNS1=xxx.xxx.xxx.xxx
      # DNS2=xxx.xxx.xxx.xxx
      # DOMAIN=lab.foo.com bar.foo.com
      nameserver 8.8.8.8
      nameserver 8.8.4.4

    Thank you.
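
    A minimal diagnostic sketch, assuming the resolv.conf above: check whether the configured resolvers answer at all, and whether the system resolver (which getaddrinfo, and therefore PHP, goes through) works. The domain names are placeholders from the question:

      # Does the configured resolver answer at all?
      dig @8.8.8.8 mysite.com +short

      # Does the system resolver (what getaddrinfo uses) work?
      getent hosts mysite.com

      # Quick check that outbound port 53 is not blocked over TCP as well as UDP
      dig @8.8.4.4 google.com +tcp +short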

  • Apache directory access with virtual host

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to open foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of the directory contents. www.foobar.com/dir has 777 permissions and .htpasswd is chmoded 644, but I can't figure out why I still cannot see the contents. Please give me a hint.

      ServerAdmin webmaster@localhost
      ServerName www.foobar.com
      ServerAlias www.foobar.com
      DocumentRoot /var/www/foobar
      <Directory />
          Options FollowSymLinks
          AllowOverride All
      </Directory>
      <Directory /var/www/foobar>
          Options -Indexes FollowSymLinks
          AllowOverride All
          Order allow,deny
          allow from all
      </Directory>
      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
      </Directory>
      <Directory /var/www/foobar/dir>
          AllowOverride AuthConfig
          AuthName "Authorize yourself, please!"
          AuthType Basic
          AuthUserFile /etc/apache2/.htpasswd
          AuthGroupFile /dev/null
          Allow from All
          Order Allow,Deny
          Require valid-user
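
    One hedged possibility: the 403 may not be an auth failure at all. The parent directory sets Options -Indexes, so requesting /dir without an index file yields 403 Forbidden even after successful authentication. A sketch of the protected block with directory listings re-enabled and the access directives grouped (Apache 2.2 syntax, paths taken from the question):

      <Directory /var/www/foobar/dir>
          Options +Indexes
          AuthType Basic
          AuthName "Authorize yourself, please!"
          AuthUserFile /etc/apache2/.htpasswd
          AuthGroupFile /dev/null
          Require valid-user
          Order allow,deny
          Allow from all
      </Directory>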

  • Google Chrome won't install via IE9/Windows7

    - by purir
    Using IE9, I've tried installing Google Chrome on Windows 7 from this url http://www.google.com/chrome/eula.html?hl=en-GB&platform=win but get the following error (apologies for uncouthly dumping this nonsense...). Any ideas dearly appreciated!

      PLATFORM VERSION INFO
        Windows                  : 6.1.7601.65536 (Win32NT)
        Common Language Runtime  : 4.0.30319.235
        System.Deployment.dll    : 4.0.30319.1 (RTMRel.030319-0100)
        clr.dll                  : 4.0.30319.235 (RTMGDR.030319-2300)
        dfdll.dll                : 4.0.30319.1 (RTMRel.030319-0100)
        dfshim.dll               : 4.0.31106.0 (Main.031106-0000)

      SOURCES
        Deployment url : _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser

      ERROR SUMMARY
        Below is a summary of the errors, details of these errors are listed later in the log.
        * Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser resulted in exception. Following failure messages were detected:
          + The system cannot find the file specified. (Exception from HRESULT: 0x80070002)

      COMPONENT STORE TRANSACTION FAILURE SUMMARY
        No transaction error was detected.

      WARNINGS
        There were no warnings during this operation.

      OPERATION PROGRESS STATUS
        * [25/06/2011 11:41:04] : Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser has started.

      ERROR DETAILS
        Following errors were detected during this operation.
        * [25/06/2011 11:41:04] System.IO.FileNotFoundException
          - The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
          - Source: System.Deployment
          - Stack trace:
              at System.Deployment.Internal.Isolation.IsolationInterop.GetUserStore(UInt32 Flags, IntPtr hToken, Guid& riid)
              at System.Deployment.Application.ComponentStore..ctor(ComponentStoreType storeType, SubscriptionStore subStore)
              at System.Deployment.Application.SubscriptionStore..ctor(String deployPath, String tempPath, ComponentStoreType storeType)
              at System.Deployment.Application.SubscriptionStore.get_CurrentUser()
              at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
              at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

      COMPONENT STORE TRANSACTION DETAILS
        No transaction information is available.

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

      [global]
      ;Path to logfile
      logfile=/var/log/suphp/suphp.log
      ;Loglevel
      loglevel=info
      ;User Apache is running as
      webserver_user=www-data
      ;Path all scripts have to be in
      docroot=/home
      ;Path to chroot() to before executing script
      ;chroot=/mychroot
      ; Security options
      allow_file_group_writeable=false
      allow_file_others_writeable=false
      allow_directory_group_writeable=false
      allow_directory_others_writeable=false
      ;Check wheter script is within DOCUMENT_ROOT
      check_vhost_docroot=true
      ;Send minor error messages to browser
      errors_to_browser=false
      ;PATH environment variable
      env_path=/bin:/usr/bin
      ;Umask to set, specify in octal notation
      umask=0077
      ; Minimum UID
      min_uid=100
      ; Minimum GID
      min_gid=100

      [handlers]
      ;Handler for php-scripts
      application/x-httpd-suphp="php:/usr/bin/php-cgi"
      ;Handler for CGI-scripts
      x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

      NameVirtualHost *:8080
      <VirtualHost *:8080>
          ServerAdmin ...
          ServerName ...
          ServerAlias ...
          AddType application/x-httpd-php .php
          AddHandler application/x-httpd-php .php
          suPHP_Engine on
          suPHP_UserGroup user user
          suPHP_ConfigPath "/home/user/etc"
          suPHP_PHPPath /usr/bin
          DocumentRoot /home/user/web/site.com/
          ErrorLog /var/log/apache2/site.com-error_log
          CustomLog /var/log/apache2/site.com-access_log common
          <Directory /home/user/web/site.com/>
              Order Deny,Allow
              Allow from all
              Options +Indexes
          </Directory>
      </VirtualHost>

    But when I created /home/user/web/id.php with nano and pasted <?php system('id'); ?> into it, the result I get is:

      uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
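
    A hedged sketch of the usual mod_suphp wiring for this kind of vhost: the id.php output shows scripts still running as www-data, i.e. through mod_php rather than suPHP, which typically means .php is not mapped to the application/x-httpd-suphp handler declared in suphp.conf. The directive names are from mod_suphp; treat the exact block as an assumption to adapt:

      <VirtualHost *:8080>
          ...
          suPHP_Engine on
          suPHP_UserGroup user user
          suPHP_ConfigPath "/home/user/etc"
          # Map .php to the handler that suphp.conf knows about,
          # instead of the mod_php handler application/x-httpd-php
          suPHP_AddHandler application/x-httpd-suphp
          AddHandler application/x-httpd-suphp .php
          # If mod_php is also loaded, keep it from grabbing the request
          php_admin_flag engine off
      </VirtualHost>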

  • Never getting a JSON response when running server-side PHP proxy script but I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, by the way. I'm using (or trying to use) Simple PHP Proxy (Simple PHP Proxy). When I enter a URL on his example page at SPP Example Page it works fine: I see the JSON response and all the headers. However, when I copy the exact same URL, only changing the host to localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or a setting in Apache and/or php.ini? So for example: benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1 gets me a ton of info back. Now changing to localhost, http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1 returns:

      {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some curl directly, and of course got empty objects back: false for my contents and only the url I set in my JSON. Any help or ideas are deeply appreciated.
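
    A hedged sketch for isolating the problem: the status block shows http_code 301 with contents null, which suggests the local curl request stops at github.com's redirect to HTTPS instead of following it. A minimal standalone test, assuming the curl extension is enabled (option names are standard PHP curl constants):

      <?php
      // Minimal reproduction outside the proxy script
      $ch = curl_init('http://github.com/');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow the 301 to https://github.com/
      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);  // testing only: rules out a missing CA bundle
      $body = curl_exec($ch);
      var_dump(curl_error($ch), curl_getinfo($ch, CURLINFO_HTTP_CODE), strlen((string)$body));
      // Note: PHP disables CURLOPT_FOLLOWLOCATION when safe_mode or open_basedir is in effect.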

  • Only receiving one document at a time from new web server.

    - by Robert Kuykendall
    We're trying to move our internal ticketing system from a Microsoft Small Business Server in the server closet to a Rackspace Cloud Server. The install is Fedora 11 LAMP and should be default out of the box, except for the vhosts appended to the bottom of httpd.conf. The new server is suffering from crippling load times, and watching the page load in Firebug it's easy to see the problem occurring, but I can't figure out the cause. Here is the old server: http://rkuykendall.com/uploads/old.server.png. I was expecting something like this, just a little slower since it is no longer hosted locally. Instead, the new server (http://rkuykendall.com/uploads/new.server.png) appears to only serve one file at a time. Here's another example of this staircase load-time effect (http://rkuykendall.com/uploads/staircase.png) and another very clear example (http://rkuykendall.com/uploads/staircase2.png). I talked to some guys on Freenode #httpd with no luck. I created a duplicate server to play with, and also created a fresh server with Fedora Core 13 and moved over just the database and web files, with no luck. Any suggestions? (Image links disabled due to n00b spam restrictions.)
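
    A hedged place to look rather than a confirmed cause: a serial, one-request-at-a-time waterfall often points at Apache having very few workers available or at keep-alive settings forcing a new connection per object. A sketch of the prefork/keep-alive directives to compare between the old and new httpd.conf (the values are illustrative defaults, not recommendations):

      # httpd.conf (prefork MPM) - compare these between the old and new servers
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 5
      <IfModule prefork.c>
          StartServers          8
          MinSpareServers       5
          MaxSpareServers      20
          ServerLimit         256
          MaxClients          256
          MaxRequestsPerChild 4000
      </IfModule>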

  • named responding recursively to norecurse queries

    - by Keks
    I have a server on which named is running. It is intercepted by another named server which it is not aware of, and querying the first named server results in timeouts. The first server tries to resolve the query recursively. During that, the firewall redirects the DNS request from the first named server to the second one (the query from the first one is addressed to e.g. a root server and has its "Recursion desired" bit set to 0). Despite that, the second named responds to this request with an entirely resolved response, or at least one resolved a level further than the first named server expects. So it ends up with a timeout even though it got a correct name server or even the full IP for the queried domain. In the first case, the first name server tries to follow the authority domain, ignoring the corresponding glue record, and ends up in a loop it aborts: queried: google.com -> got from named#2: ns1.google.com -> ignores the glue record and queries: ns1.google.com -> got authority from named#2: google.com. In the second case, it ignores the answer section with the correct IP and instead tries to follow the name servers from the authority section, which ends up in the same dead end as case 1. So how can it be that the second named responds with recursive results even though the bit was explicitly set to 0 in the request from the first named?
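
    A hedged way to reproduce what the first named sees, using dig with recursion disabled; the intercepting server's address is a placeholder:

      # Ask the intercepting server exactly what a non-recursive iterator would ask
      dig @192.0.2.2 google.com A +norecurse

      # Compare with a genuinely non-recursive referral from a TLD server
      dig @a.gtld-servers.net google.com A +norecurse

      # In the reply, a set "ra" flag and a filled ANSWER section indicate the server
      # answered from its recursive cache despite RD=0 in the query.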

  • Host is missing hostname and/or domain

    - by anlawang
    I use puppet 0.25.4 on Ubuntu 10.04. When puppet was installed, I got the information below:

      Nov 29 10:30:30 puppet puppetmasterd[4422]: Host is missing hostname and/or domain: pclient.example.com
      Nov 29 10:30:30 puppet puppetmasterd[4422]: Compiled catalog for pclient.example.com in 0.02 seconds

    I don't know how to fix it; who can help me? Thank you! My configuration (I used apt-get to install puppet, so some of the configuration was already set): puppet.conf on the client:

      [main]
      server=puppet.example.com
      logdir=/var/log/puppet
      vardir=/var/lib/puppet
      ssldir=/var/lib/puppet/ssl
      rundir=/var/run/puppet
      factpath=$vardir/lib/facter
      pluginsync=false
      templatedir=$confdir/templates
      prerun_command=/etc/puppet/etckeeper-commit-pre
      postrun_command=/etc/puppet/etckeeper-commit-post
      certname=pclient.example.com
      node_name=cert

      [puppetd]
      runinterval=30

    puppet.conf on the server:

      [main]
      logdir=/var/log/puppet
      vardir=/var/lib/puppet
      ssldir=/var/lib/puppet/ssl
      rundir=/var/run/puppet
      factpath=$vardir/lib/facter
      pluginsync=true
      templatedir=$confdir/templates
      prerun_command=/etc/puppet/etckeeper-commit-pre
      postrun_command=/etc/puppet/etckeeper-commit-post

    I use the default node in site.pp. I am new to puppet, so I don't know the reason for these problems. Thank you again!
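
    A hedged guess at the usual cause: the "Host is missing hostname and/or domain" warning typically appears when facter cannot derive an FQDN (hostname plus domain) for the node. A sketch of checking and setting it on the client; the names come from the question, the address is a placeholder:

      # What facter currently reports
      facter hostname domain fqdn

      # /etc/hostname
      pclient

      # /etc/hosts - list the FQDN first so "hostname -f" resolves it
      127.0.0.1   localhost
      192.0.2.10  pclient.example.com  pclient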

  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question that I asked, which was answered by womble: http://serverfault.com/a/406280/127729

      So, now we have the real DNS records, we can do some diagnosis. dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good. dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order. wget -6 bastosforum.strangled.net/ gives a 200 OK response. wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4. Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself.

    I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from freenet6, and my domain is akosiboybastos.broker.freenet6.com; nothing fancy with the installation, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As suggested by womble, this is a misconfigured webserver. To be honest I don't know where to start checking on the server, as it is fully working if you use the domain given by freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. Regards, boyb
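
    A hedged guess rather than a diagnosis: if the phpBB vhost in XAMPP is bound to a specific (IPv6) address, IPv4 requests for bastosforum.strangled.net can fall through to Apache's default vhost and 404 there. A name-based vhost bound to all addresses, with the test domain added as an alias, is the usual shape of the fix (the DocumentRoot path is an assumption, not taken from the post):

      <VirtualHost *:80>
          ServerName  akosiboybastos.broker.freenet6.net
          ServerAlias akosiboybastos.broker.freenet6.com bastosforum.strangled.net
          DocumentRoot "/opt/lampp/htdocs/phpbb3"
      </VirtualHost>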

  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic to understand whether there is any chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB. From what I understand (please correct where wrong!):

    - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the TLD registry for the domain (e.g. Verisign for .com). CompanyA currently has the ability to change the SOA record because they're the registrar.
    - There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the TLD registry, and which CompanyA are also able to change while acting as registrar.
    - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
    - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
    - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
    - As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.

    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.

  • cPanel FTP account access to sym links from parent directory

    - by totbar
    I would like to give a potential developer temporary access to some of my projects. I have almost everything in its own subdomain, and each directory is a sibling of my public_html directory. It looks something like this ("developer" is the cPanel account name):

      developer/      # top-level directory for the cPanel account, /home/developer
        site1/        # site1.mysite.com
        site2/        # site2.mysite.com
        site3/        # site3.mysite.com
        public_html/  # www.mysite.com
        ...           # etc.

    I created a directory inside public_html called tempdev and added symbolic links to each of the sibling directories listed above. My understanding of cPanel is that I can only assign one user with "Special FTP Access" per domain, and I really don't want to give a complete stranger my login credentials (it's just a development environment, but still). So I used the cPanel FTP account creator UI. It will not allow me to assign the user access to directories outside of public_html; I can't even give access to public_html itself. So I made the tempdev directory in www and created the symlinks. Using the new account, I can see the symlinks, but I can't go into them. Is there a better way to accomplish what I am attempting?
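
    A hedged alternative sketch: an FTP server generally can't follow symlinks that point outside the FTP user's chrooted home, so a bind mount is the usual workaround. The paths follow the layout above and this assumes root shell access on the server:

      # Make the developer's FTP tree see site1 without a symlink
      mkdir -p /home/developer/public_html/tempdev/site1
      mount --bind /home/developer/site1 /home/developer/public_html/tempdev/site1

      # To make it survive reboots, add to /etc/fstab:
      # /home/developer/site1  /home/developer/public_html/tempdev/site1  none  bind  0 0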

  • How do I set up a Mac OS X Server nameserver behind an AirPort Extreme?

    - by basilmir
    I have a Mac mini server I want to set up to host a couple of things. My setup is as follows: the WAN connection (static IP and ISP nameservers) goes into the WAN port of the AirPort Extreme, and the Mac mini server is connected to one of the Ethernet ports. The Mac mini will host my domain, something.com. My settings so far:

      AirPort Extreme:
        96.x.x.x  - external static IP from the ISP
        174.y.y.y - nameserver

      Mac mini server (always gets a reserved DHCP IP from the AirPort Extreme):
        10.0.1.3 - the server's IP
        10.0.1.1 - the DNS (this IP is the AirPort Extreme itself)

    My DNS server has an A record pointing to ns.something.com and a PTR doing the reverse. I've already added my 96.x.x.x as ns.something.com with my registrar. Now: nobody outside seems to be able to reach ns.something.com to resolve any of my records. From any computer in my network I CAN see my ns and everything works; the outside, on the other hand, cannot. It's as if the AirPort Extreme, which holds the exterior 96.x.x.x address, doesn't pass DNS along to my 10.0.1.3 ns server. I have the server managing the AirPort. Isn't this supposed to work?
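
    A hedged sketch of what usually has to be in place: the AirPort needs an explicit port mapping for DNS (UDP and TCP 53) to 10.0.1.3, since having the server "manage" the AirPort does not by itself forward inbound traffic, and an outside test should go straight at the public address. Addresses are as in the question; menu names vary by AirPort Utility version:

      # From a machine OUTSIDE the network, query the public IP directly
      dig @96.x.x.x ns.something.com A +short

      # In AirPort Utility, under Port Mapping, add:
      #   Public UDP Port(s): 53   Private IP: 10.0.1.3   Private UDP Port(s): 53
      #   Public TCP Port(s): 53   Private IP: 10.0.1.3   Private TCP Port(s): 53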

  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, which is still running Intrepid (8.10). I'd like to upgrade it to a newer version of the distribution, so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core. I tried the following:

      $ sudo apt-get install update-manager-core

    but this fails since some of the necessary packages can't be found:

      ...
      Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4
        404 Not Found [IP: 91.189.88.40 80]
      Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
        404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
      ...

    I know that Intrepid is no longer supported, and so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix this would be to upgrade it. Is there a way round this? Is something else wrong?
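
    A hedged sketch of the usual way around this: packages for end-of-life releases move to old-releases.ubuntu.com, so pointing sources.list there lets apt fetch update-manager-core again. Back up the file first; whether the subsequent release-upgrade chain completes cleanly is a separate question:

      sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
      sudo sed -i 's/\(archive\|security\).ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install update-manager-core
      sudo do-release-upgrade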

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a 500 server error: "There is a problem with the resource you are looking for, and it cannot be displayed." Whilst you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also, it's a 500 error rather than a 403. I'm sure I must be missing something obvious here: will the virtual directory use the permissions defined in the parent application, or those of the sub-application to which it is pointing? Or are there some other permissions I may have missed? Sorry, that was a bit long; thanks for reading all the way down here! (I should also point out that I'm pretty new to server management.) EDIT: also, we have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.

  • No outbound internet connection after restarting CentOS 6.3

    - by wnstnsmth
    After restarting a headless CentOS 6.3 machine, it lost outbound internet connectivity, i.e. I can still connect to the server via SSH (ssh root@**.126.18.56), but stuff such as ping google.com gives "google.com: unknown host", and yum list some_package gives a lot of network errors. This is what ifconfig gives:

      eth0      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5D
                inet addr:**.126.18.56  Bcast:**.126.18.255  Mask:255.255.255.0
                inet6 addr: fe80::225:90ff:fe78:2d5d/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:75594 errors:0 dropped:0 overruns:0 frame:0
                TX packets:787 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:7074741 (6.7 MiB)  TX bytes:144391 (141.0 KiB)
                Interrupt:20 Memory:f7a00000-f7a20000

      eth1      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5C
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                Interrupt:16 Memory:f7900000-f7920000

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:504 (504.0 b)  TX bytes:504 (504.0 b)

    I have absolutely no clue how to debug this, and I find it very strange since I can still connect via SSH.

    EDIT: Weirdly, /etc/resolv.conf does not contain any entries, or none that I can make sense of:

      # Generated by NetworkManager
      search sui-inter.net
      # No nameservers found; try putting DNS servers into your
      # ifcfg files in /etc/sysconfig/network-scripts like so:
      #
      # DNS1=xxx.xxx.xxx.xxx
      # DNS2=xxx.xxx.xxx.xxx
      # DOMAIN=lab.foo.com bar.foo.com

    So is it possible that rebooting the server erased that file? It worked before, at least! And how do I solve this? By the way, pinging an IP address works.
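
    A hedged sketch of the fix the NetworkManager comment itself suggests: since name resolution is the only thing broken (pinging an IP works), adding DNS servers to the interface config, or directly to resolv.conf as a quick test, should restore lookups. File names are CentOS conventions; the Google resolvers are just an example:

      # /etc/sysconfig/network-scripts/ifcfg-eth0  (append)
      DNS1=8.8.8.8
      DNS2=8.8.4.4
      PEERDNS=yes

      # then
      service network restart

      # or, as a quick test without touching ifcfg files:
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf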

  • Can't access certain web sites - reset router, any ideas?

    - by IniTech
    EDIT: This problem was resolved by my ISP - it had to do with damaged fiber in one of their locations. Thanks to everyone that helped. Not sure if this is the right site (I'm a StackOverflow user), so I thought I'd give it a shot. I'm having trouble connecting to certain sites on any of the 3 machines that are on my LAN. The following sites are returning "Problem Loading Page - The connection has timed out": Sourceforge.net, CNet.com, Microsoft.com, OpenDNS.com, and even my company's website. I was worried about possible malware/viruses, but I don't think that is the case (given the inability to access my company's site and the fact that all 3 machines are having the same issues). I've tried with IE8, FF, and Chrome. I have reset my router (WRT54G) and my machine(s) multiple times. EDIT: It is also worth noting that this page spins constantly and no avatars show up (I'm assuming it is trying to access gravatar.com with no success). EDIT: I have the same issues directly connected to the modem, so any router config is probably not the issue. I'm a programmer, not a network guy - any ideas?

  • sendmail appends server name to external domains when relaying

    - by Chris
    My server is set to send all email to a corporate relay server. For the company domain, it works perfectly. I've recently found that emails sent to an outside domain are getting the hostname of my server appended to the address prior to being sent. Here is the log entry for one such attempt:

      Nov 6 09:46:45 myservername sendmail[45023]: rA6EkjiI045023: to=user@outside.com, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30590, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (rA6Ekj2g045037 Message accepted for delivery)
      Nov 6 09:46:45 myservername sendmail[45061]: rA6Ekj2g045037: to=<user@outside.com.myservername>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=120885, relay=relay.company.com [x.x.x.x], dsn=2.0.0, stat=Sent (ok: Message 342335947 accepted)

    Notice the email address difference between it being accepted by my server for delivery (correct email address) and being sent and accepted by the corporate relay (incorrect, with the server name appended). To make it more interesting, the application on my server uses email for user account verification/activation. In August, this particular user was able to register his account and activate it. I have made no configuration changes to mail since setting the server up over a year ago. DNS is also a corporate service. I've never touched my /etc/resolv.conf configuration:

      domain company.com
      nameserver <ip1>
      nameserver <ip2>
      search myservername

    Thanks!
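
    A hedged thing to check, not a confirmed diagnosis: sendmail canonifies recipient domains through the resolver, and with "search myservername" in resolv.conf a lookup for outside.com can end up retried as outside.com.myservername; if that name happens to resolve (for example via a wildcard record), sendmail rewrites the address. A quick check, and the m4 feature that turns canonification off:

      # Does the search list make the bogus name resolve?
      host outside.com.myservername

      # In sendmail.mc, disable canonification of addresses, then rebuild and restart:
      FEATURE(`nocanonify')dnl
      # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart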

  • Errors while resolving DNS queries

    - by user2803887
    I followed this document to configure master-slave PowerDNS servers: http://linuxmanage.com/master-slave-powerdns-managed-by-poweradmin.html Installation completed perfectly with no errors, and I even feel DNS is trying to resolve some queries and parameters, but when going through intodns.com I get the errors below for domain names which I created in the PowerDNS name server installed per the guide above.

      Error: Mismatched NS records
        WARNING: One or more of your nameservers did not return any of your NS records.
      Error: Multiple Nameservers
        ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.
      Error: Missing nameservers
        You should already know that the NS records reported by your nameservers are missing, so here it is again:
          nameservers ns1.makeittiny.com. ns2.makeittiny.com.

    I am quite new to PowerDNS, so I am not able to figure out where the problem is. I have checked everything but cannot make out where the problem remains.
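
    A hedged way to see what the servers actually return, using the names from the question: intodns compares the NS set delegated by the .com registry with the NS records each server serves for the zone itself, so both servers should answer with ns1 and ns2:

      # Ask each authoritative server for the zone's NS records directly
      dig @ns1.makeittiny.com makeittiny.com NS
      dig @ns2.makeittiny.com makeittiny.com NS

      # Compare with the delegation the .com registry hands out
      dig makeittiny.com NS +trace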

  • apache front-end rewriting URL to different https ports?

    - by khedron
    Hi all, one of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:

      front page:                 http://www.myexample.com/
      public ref to internal app: http://www.example.com/app-8903/app.html
      secretly goes to:           http://secret.example.com:8903/app-8903/app.html

    This is to say, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like the URL directory/base was changing. Now the user wants to do the same thing over https:, and the people helping him with Apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the apache2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.
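
    A hedged sketch of how this is commonly done with mod_proxy or mod_rewrite on the front end; the hostnames and port come from the question, and proxying to an https backend needs mod_ssl's SSLProxyEngine on the front-end vhost:

      # Front-end vhost on www.example.com, listening on 443 with its own certificate
      SSLProxyEngine On

      # Either a straight reverse proxy...
      ProxyPass        /app-8903/ https://secret.example.com:8903/app-8903/
      ProxyPassReverse /app-8903/ https://secret.example.com:8903/app-8903/

      # ...or the mod_rewrite equivalent
      RewriteEngine On
      RewriteRule ^/app-8903/(.*)$ https://secret.example.com:8903/app-8903/$1 [P]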

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds but Nginx doesn't resolve the link. The error log gives:

      2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

      server {
          listen 80;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;

          location /mypage {
              auth_basic "Restricted";
              auth_basic_user_file htpasswd;
          }
      }

      server {
          listen 443;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;
          ssl on;
          ssl_certificate /etc/nginx/conf/certs/server.crt;
          ssl_certificate_key /etc/nginx/conf/certs/server.key;
          keepalive_timeout 70;

          location /mypage {
              auth_basic "Restricted";
              auth_basic_user_file htpasswd;
          }
      }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
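
    A hedged sketch of the usual explanation with Passenger: a location block that only sets auth_basic can stop the request from being handled by Passenger, so nginx falls back to looking for a static file named /mypage under root, which is exactly the open() error above. Re-enabling Passenger inside the protected location is the common shape of the fix:

      location /mypage {
          auth_basic           "Restricted";
          auth_basic_user_file htpasswd;
          passenger_enabled    on;   # keep routing this path through Sinatra/Passenger
      }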

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder in each site named /edit. Given that I have 2 websites now working on that simple system:

      websiteA/index.php (etc)
      websiteA/edit/
      websiteB/index.php (etc)
      websiteB/edit/

    What is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their view of /edit and yet the code only exists in one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

      <VirtualHost 00.00.00.00:80>
          DocumentRoot /var/www/html/websitea.com
          ServerName www.websitea.com
          ServerAlias websitea.com
          DirectoryIndex index.htm index.php
          CustomLog logs/websitea combined
      </VirtualHost>

    Is it possible to create an alias inside that directive for the folder websitea.com/edit?
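
    A hedged sketch of one way to do it with mod_alias, answering what the EDIT asks; the shared path /var/www/shared/edit is an assumption, not something from the post:

      <VirtualHost 00.00.00.00:80>
          DocumentRoot /var/www/html/websitea.com
          ServerName www.websitea.com
          ServerAlias websitea.com
          DirectoryIndex index.htm index.php
          CustomLog logs/websitea combined

          # Serve the shared editing scripts at /edit for this site only
          Alias /edit /var/www/shared/edit
          <Directory /var/www/shared/edit>
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>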

  • Connection reset to some websites

    - by user143271
    I'm using a 2Wire 3600HGV modem/router. Starting around this afternoon, any time I try to access anything from i.imgur.com I get "The connection to i.imgur.com was interrupted" in Chrome, and the actual error is Error 101 (net::ERR_CONNECTION_RESET). It's network-wide (tested with multiple browsers on multiple computers and phones). I can access imgur.com just fine, but nothing from its content server i.imgur.com. If I disable wifi on my phone and use its 4G connection, I can access it just fine, so obviously imgur isn't down. I haven't changed any configuration on the router, and I have tried changing DNS servers (I tried Google and OpenDNS). It also seems that imgur is not the only site; howtogeek and a couple of others seem to have the same problem. It looks like they are all EdgeCast CDN content servers, but not all EdgeCast CDN servers fail. Tumblr, for instance, works just fine. Does anyone have any idea what would be causing this? EDIT: Related to the EdgeCast remark, it would appear that this is a specific EdgeCast server: gs1.wpc.edgecastcdn.net. Tumblr's content is on gs1.wac.edgecastcdn.net, so it might be on a different server. EDIT #2: These sites all respond to ping just fine as well.

  • What could cause random files to be uploaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone is placing files into it, or something of that sort. Any attempt to delete them is successful; HOWEVER, they reappear over time (maybe not the exact same ones, but random files). I will provide what information I can and several pictures of my problem:

      sandbox.mys4l.com/visual/files/b1.jpg - Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.
      sandbox.mys4l.com/visual/files/b2.jpg - This is what is inside one of those weird files; it appears to be nothing problematic.
      sandbox.mys4l.com/visual/files/b4.jpg - As you can see, in the time it took me to take the first picture, more odd files showed up. These log files are also being uploaded to this directory, and I know I didn't put them there.
      sandbox.mys4l.com/visual/files/b7.jpg - This is inside one of those mysterious .log files; I'm not sure what it's all about.

    These files only appear in this specific area, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner, and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys as I hear this is the best place to find answers. Hope this problem has a solution!
