Search Results

Search found 27295 results on 1092 pages for 'cross site'.


  • What is the general process of web hosting?

    - by ggfan
    I want to make my site public so people can use it. I am currently using a free PHP web hosting company that supports only up to a certain amount of usage. When sites say they offer unlimited uploads, data, etc. for around $10/month, is that all you need to run a big site? Or how do I host a big site if it gets popular?

    Read the article

  • How can I forward all web traffic from my Cisco ASA 5100 to a checkpoint firewall?

    - by Scott Clements
    I currently have two Cisco ASA 5100 routers at different physical locations, set up with a site-to-site VPN. They are successfully configured so that all traffic at our remote site is forwarded over the VPN tunnel to our router here, which is fine. However, I need the web traffic that arrives here to then be forwarded on to our Check Point firewall router. Can someone please tell me how to configure this? Many thanks, Scott

    Read the article

  • Not able to save Global navigation - SharePoint 2007

    - by Ryan
    I have migrated my site collection (migsitecollection) to a different farm using a content deployment job: http://vsmoss/sites/migsitecollection. I used the Collaboration Portal template to create it. It works fine where I migrated it from, but after running the content deployment jobs, my new migrated site's global navigation settings are not saved when I try to change them under Site Settings > Navigation, and in the logs I can see this error: The SPNavigation store is likely corrupt. The solution I've seen on the net is to change onet.xml and run a script against the SQL database for the site. I am eager for a better answer than that, but if it is the only one, I have a few doubts about it: First, as my site template is not customised (it is the Collaboration Portal), I am not sure where exactly to change the onet.xml. Second, I am using the same database as my running web application; would running that script affect anything else on the main site?

    Read the article

  • Advice for Storing and Displaying Dates and Times Across Different Time Zones

    A common question I receive from clients, colleagues, and 4Guys readers is for recommendations on how best to store and display dates and times in a data-driven web application. One of the challenges in storing and displaying dates in a web application is that it is quite likely that the visitors arriving at your site are not in the same time zone as your web server; moreover, it's very likely that your site attracts visitors from many different time zones around the world. Consider an online messageboard site, like ASPMessageboard.com, where each of 1,000,000+ posts includes the date and time it was made. Imagine a user from New York leaves a post on April 7th at 4:30 PM, and that the web server hosting the site is located in Dallas, Texas, which is one hour earlier than New York. When storing that post to the database, do you record the post's date and time relative to the visitor (4:30 PM), relative to the web server (3:30 PM), or some other value? And when displaying this post, how do you show that date and time to a reader in San Francisco, which is three hours earlier than New York? Do you show the time relative to the person who made the post (4:30 PM), relative to the web server (3:30 PM), or relative to the reader (1:30 PM)? And if you decide to store or display the date based on the poster's or visitor's time zone, how do you know their time zone and its offset? How do you account for daylight saving time, and so on? This article provides guidance on how to store and display dates and times for visitors across different time zones, and includes a demo that gives a working example of some of these techniques. Read on to learn more!
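
    The near-universal recommendation, and likely where the article lands, is to store timestamps in UTC and convert to each viewer's time zone only at display time. A minimal C# sketch; the time-zone ID is a placeholder, and where it comes from (e.g. a profile setting) is an assumption:

        using System;

        class TimeZoneDemo
        {
            static void Main()
            {
                // Record the moment in UTC at write time, regardless of where
                // the poster or the web server happens to be.
                DateTime postedUtc = DateTime.UtcNow;

                // At display time, convert to the viewer's zone. A real app
                // would read the zone ID from the user's profile (assumption).
                TimeZoneInfo viewerZone =
                    TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
                DateTime postedLocal =
                    TimeZoneInfo.ConvertTimeFromUtc(postedUtc, viewerZone);

                Console.WriteLine("Stored (UTC):      {0:u}", postedUtc);
                Console.WriteLine("Displayed (local): {0}", postedLocal);
            }
        }

    ConvertTimeFromUtc applies the zone's daylight saving rules for the date in question, which is exactly the part that hand-rolled hour offsets tend to get wrong.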

    Read the article

  • DNS hijack - prevention tips

    - by user578359
    Over the weekend it looks like the DNS was hijacked on two of my domains. My setup: the sites are registered at 1and1.co.uk, with DNS nameservers pointing to HostGator in the US, where the sites are hosted. I also had the Cloudflare CDN running on the sites (via the HostGator cPanel). My question: any ideas as to how this happened, and how I could either monitor it so I know if it occurs again (a monitoring sketch follows below), or strengthen the setup/service to minimise the risk?

    History:
    - I received a ping from my site monitoring service that the sites were down. When I checked, the sites were up, so I assumed the problem was local to the monitoring service.
    - I received a ping last night that the sites were up.
    - When I checked, one site was redirecting to download-manual.com (and checking that URL now, its home page is not the same as the one I saw, so they too may have been hijacked/hacked).
    - The other site's URL remained the same, but it served one of those standard site-search pages that bounce you off to either phishing or paid-for search sites.
    - I notified HostGator, who told me Cloudflare or 1and1 were the issue. I removed Cloudflare and contacted both them and HostGator, and am awaiting a response, but am not holding my breath.

    Is this common? I've never heard of or come across this before. It's pretty scary that this can happen so easily. Appreciate any input.

    Update: I've now spoken to support at 1and1, HostGator, and Cloudflare, and each one claims it has nothing to do with them and must be one of the others. Larry, Curly, Moe.
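
    On the monitoring point, one low-tech approach is a cron job that compares the live NS records against the expected set and alerts on any drift. A sketch in plain sh, assuming dig is available; the domain and nameserver names are placeholders:

        #!/bin/sh
        # Alert if the delegated nameservers for a domain change.
        DOMAIN="example.com"                                 # placeholder
        EXPECTED="ns1.hostgator.com ns2.hostgator.com"       # placeholder

        # Trailing dots stripped, sorted, space-separated.
        LIVE=$(dig +short NS "$DOMAIN" | sed 's/\.$//' | sort | tr '\n' ' ')

        for ns in $EXPECTED; do
            case " $LIVE" in
                *" $ns "*) ;;   # expected nameserver still present
                *) echo "ALERT: $ns missing from NS set for $DOMAIN (live: $LIVE)" ;;
            esac
        done

    Run it from cron and mail the output; an empty run means the delegation still looks as expected.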

    Read the article

  • How to install ac-R mode in emacs?

    - by David
    I have recently added the file ac-R.el to /usr/local/share/emacs/site-lisp, along with (require 'ac-R) in ~/.emacs. Now, when I open emacs with --debug-init, I get the error:

        Debugger entered--Lisp error: (void-variable ac-modes)
          add-to-list(ac-modes ess-mode)
          eval-buffer(#<buffer *load*<2>> nil "/usr/local/share/emacs/site-lisp/ac-R.el" nil t)  ; Reading at buffer position 7191
          load-with-code-conversion("/usr/local/share/emacs/site-lisp/ac-R.el" "/usr/local/share/emacs/site-lisp/ac-R.el" nil t)
          require(ac-R)
          eval-buffer(#<buffer *load*> nil "/home/dlebauer/.emacs" nil t)  ; Reading at buffer position 3548
          load-with-code-conversion("/home/dlebauer/.emacs" "/home/dlebauer/.emacs" t t)
          load("~/.emacs" t t)
          #[nil "\205\264

    When I click on load-with-code-conversion, it says it can't find the library /usr/share/emacs/23.1.50/lisp/international/mule.el, even though I have installed mule via synaptic (I am using Ubuntu 10.04). How can I get the mule library in the right place?
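
    For what it's worth, (void-variable ac-modes) means ac-R.el calls add-to-list on a variable that nothing has defined yet; ac-modes is defined by the auto-complete package, so ac-R most likely expects auto-complete to be loaded first (an assumption about ac-R's dependencies). A minimal ~/.emacs ordering sketch:

        ;; Load auto-complete *before* ac-R so that the ac-modes variable
        ;; exists when ac-R.el runs (add-to-list 'ac-modes 'ess-mode).
        (add-to-list 'load-path "/usr/local/share/emacs/site-lisp")
        (require 'auto-complete)  ; defines ac-modes (assumes it is installed)
        (require 'ac-R)

    The mule.el complaint may be a side issue: mule.el ships inside Emacs itself under lisp/international/, so a separately installed synaptic package would not supply it at the 23.1.50 path that build is looking in.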

    Read the article

  • Windows 7 VPN Client Default IPsec Configuration?

    - by bwerks
    As far as I can tell, the Windows VPN client doesn't provide a lot of flexibility in its IPsec settings. Assuming full configurability on the site end of a client-to-site VPN configuration, does anyone know how to configure the site end to match the Windows client? Bonus points: how would I discover these settings for myself?

    Read the article

  • Password-free logins using your email address only?

    - by Mario
    The state of logins is horrendous. With each site having its own rules for passwords, it can be very hard to remember which variation you used on any given site. Logins are pure pain. One thing I love about Craigslist is that it did away with logins altogether. I know this design may not suit every site, but there's something to it that deserves to be repeated. OpenID is great on sites that have adopted it, but it's still not standard. Would it be feasible/wise to use an email address as the login and provide no password? The site would send a short-term key directly to your email address; you click on the link and you're in. When you're done, you "log out" and your key is terminated. I've toyed with this idea before. What concerns (i.e. spammers, bots, etc.) would make this impractical or unsafe, and could they be overcome?
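
    Mechanically the scheme is easy to sketch. A minimal C# outline; the TokenStore and Mailer helpers are hypothetical placeholders for whatever persistence and mail delivery the site already has:

        using System;
        using System.Security.Cryptography;

        class EmailLogin
        {
            // Issue a short-lived, single-use login token and mail it as a link.
            public static void SendLoginLink(string email)
            {
                byte[] raw = new byte[32];
                RandomNumberGenerator.Create().GetBytes(raw);  // unguessable token
                string token = Convert.ToBase64String(raw)
                    .TrimEnd('=').Replace('+', '-').Replace('/', '_');  // URL-safe

                // Hypothetical store: token -> (email, expiry), single use.
                TokenStore.Save(token, email, DateTime.UtcNow.AddMinutes(15));

                // Hypothetical mailer; the URL format is an assumption.
                Mailer.Send(email, "Your login link",
                            "https://example.com/login?key=" + token);
            }

            // On /login?key=..., atomically consume the token and start a session.
            public static bool TryLogin(string token, out string email)
            {
                return TokenStore.TakeIfValid(token, out email);  // hypothetical
            }
        }

    The single-use, short-expiry token is what answers the obvious concern: a sniffed or forwarded link goes stale quickly, which is roughly the same exposure as a password-reset email.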

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

        [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    As well as the following entry in the destination QA machine's access log:

        192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
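
    For what it's worth, "\x16\x03\x01" is the start of a TLS handshake being logged as plain text, which suggests SSL isn't being terminated where expected: SSLProxyEngine only encrypts the proxy-to-backend leg and does not make the frontend vhost itself speak SSL to browsers. A sketch of what the 443 vhosts likely also need (certificate paths are placeholders):

        <VirtualHost 192.168.168.101:443>
            ServerName dev1.site.com
            # Terminate SSL from clients on this leg...
            SSLEngine On
            SSLCertificateFile    /etc/pki/tls/certs/dev1.site.com.crt    # placeholder
            SSLCertificateKeyFile /etc/pki/tls/private/dev1.site.com.key  # placeholder
            # ...and re-encrypt on the proxy-to-backend leg.
            SSLProxyEngine On
            ProxyPreserveHost On
            ProxyPass / https://192.168.168.111/
        </VirtualHost>

    With two name-based vhosts on one IP:443, a matching NameVirtualHost 192.168.168.101:443 line would also be needed, with the usual Apache 2.2 caveat that serving distinct certificates per name requires SNI support.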

    Read the article

  • IE does not remember SharePoint password on saving

    - by pencilslate
    I am connecting to a SharePoint-hosted site outside of my intranet through IE 8. While accessing the site, I am required to provide a user name and password, with an option to remember the password. Selecting "remember password" doesn't seem to work: it prompts me every time I access the site. Is there a workaround for this? Many thanks!

    Read the article

  • How to configure custom error page in Plesk 9.3 for non existing folder?

    - by Junior Mayhé
    I'm trying to configure Plesk to show website visitors custom error HTML pages. The hosted site is an ASP.NET site that shows its custom errors via error403.aspx and error404.aspx. To comply with Plesk, I've created an error_docs directory with the required files (forbidden.html, etc.). When a user tries to navigate to http://mysite.com/a_missing_page.aspx, the visitor is redirected to error404.aspx correctly. But when a user tries to navigate to a nonexistent directory, http://mysite.com/a_missing_folder/, the site shows the regular IIS 404 page. Plesk has Custom error documents activated in the web hosting settings, and the ASP.NET error pages defined in web.config are showing fine, but it seems Plesk won't show its custom HTML error documents. The bottom line here is setting up a custom error page for a directory. Is it possible to do this using Plesk, or do I have to change it manually in IIS?
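
    If the server is IIS 7 (an assumption; Plesk 9.3 also ran on IIS 6, where this lives in the IIS custom-errors settings instead), the missing-folder case can be handled at the IIS level, since a request for a nonexistent directory never reaches ASP.NET's customErrors. A hedged web.config sketch:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <!-- Handled by IIS itself, so it also covers requests
                 (like missing folders) that never reach ASP.NET. -->
            <httpErrors errorMode="Custom" existingResponse="Replace">
              <remove statusCode="404" />
              <error statusCode="404" path="/error404.aspx" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>

    Whether Plesk preserves or overwrites hand-edited IIS settings on its next sync is a separate question worth checking first.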

    Read the article

  • HTTP 401 Challenge and HTTP 302 Login/Redirect won't work together in IIS7

    - by RandomBen
    I am developing a website using .NET 3.5 that allows users to visit the site and create logins using the standard Microsoft login controls. However, users do not need to log in for general things like viewing products. Now I need to set up the site so some of our traveling sales people can access it while no one else can. The easiest way I know to do this is to turn on Windows Authentication for the site in IIS7. When I do that, I get all sorts of errors due to also having Forms Authentication turned on. If I turn Forms Auth off, I get a different kind of error. Does anyone know how to make Forms Auth and Windows Auth play nicely on a single site in IIS7, or some other way to create a required login without having me kill Forms Auth?
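
    One pattern that comes up for this (a sketch of the idea, not a drop-in answer): leave the site on Forms Authentication, and add a small page in a subfolder where IIS has anonymous access disabled and Windows Authentication enabled. The sales people hit that page, IIS runs the 401 challenge against the domain, and the page converts the result into the site's normal Forms ticket. The folder name and redirect target are assumptions:

        // /WinLogin/Default.aspx.cs -- this folder alone is protected by
        // Windows auth in IIS (anonymous disabled here, enabled elsewhere).
        using System;
        using System.Web.Security;

        public partial class WinLogin : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // IIS has already completed the 401 challenge by now.
                string domainUser = Request.LogonUserIdentity.Name; // e.g. DOMAIN\jdoe

                // Hand out the site's standard Forms ticket and move on.
                FormsAuthentication.SetAuthCookie(domainUser, false);
                Response.Redirect("~/Default.aspx"); // assumed landing page
            }
        }

    Everyone else continues through the ordinary Forms login, so the two schemes never have to be active on the same URL.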

    Read the article

  • Set up Linux box for secure local hosting A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:
    - CentOS 5.5 x86_64
    - httpd: Apache/2.2.3
    - mysql: 5.0.77 (to be upgraded)
    - php: 5.1 (to be upgraded)

    The requirements:
    - SECURITY!!
    - Secure file transfer
    - Secure client access (SSL certs and CA)
    - Secure data storage
    - Virtualhosts/multiple subdomains
    - Local email would be nice, but not critical

    The steps:

    Download the latest CentOS DVD iso (a torrent worked great for me).

    Install CentOS: While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: set up users, networking/IP address, etc. Yum update/upgrade.

    Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to the package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # list all packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php                 # list the installed php packages we want to remove
        yum shell                          # open an interactive yum shell
        remove php-common php-mysql php-cli         # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common   # add the packages you want
        transaction solve                  # important!! checks for dependencies
        transaction run                    # important!! does the actual installation
        [control+d]                        # exit yum shell

        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql               # see installed mysql packages
        yum shell
        remove mysql mysql-server          # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve                  # important!! checks for dependencies
        transaction run                    # important!! does the actual installation
        [control+d]                        # exit yum shell

        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users; uncomment or add:

        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up   # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1       # copy the default script and match the name to our virtual interface
        vi ifcfg-eth1:1                  # modify the eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts.

        service network restart
        Shutting down interface eth0:     [ OK ]
        Shutting down interface eth1:     [ OK ]
        Shutting down loopback interface: [ OK ]
        Bringing up loopback interface:   [ OK ]
        Bringing up interface eth0:       [ OK ]
        Bringing up interface eth1:       [ OK ]

        ping 192.168.1.3
        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that points to it. I will use the above virtual interface for this site (herein called dev.site.local). Add the following to the end of /etc/httpd/conf/httpd.conf:

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. It turns out I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    SSL: To get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443   # make sure this line is in httpd.conf

        <VirtualHost 192.168.1.3:443>     # change the port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page over SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thus the page itself was being thrown at the browser as the cert, making the browser balk.

    So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt   # this and the above command set up the database that keeps track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
        # creates cacert.pem, which gets distributed and imported into the browser(s)

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
        # generates the certificate request, and key.pem, which I renamed dev.key.pem

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem
        # create and sign the certificate

        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/dev/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443

        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put the CA's cacert.pem in a web-accessible place and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.

    Read the article

  • Handling SEO for infinite pages that cause slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about a minute). Links on the site point to these pages, and when a user clicks one, it is generated while he waits. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handling these pages. Options:

    1. Create really simple pages for the web spiders, so that only real users trigger the data fetch and full page generation. I'm a little 'afraid' Google will see this as low-quality content, which might also look duplicated.
    2. Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt (a sketch follows below). The problem here is that I don't want users to have to deal with a different URL when they want to share the page or make sense of it. I thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching the pages. Again, I'm not sure Google will like me for that.
    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site also seems slower than it should from a spider's perspective (if it only crawled the already-generated pages, it would look much faster).

    Which approach would you suggest?
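
    For the robots.txt part of the second option, the mechanics are at least simple. A minimal sketch, with the directory name taken from the question:

        # robots.txt at the site root
        User-agent: *
        Disallow: /non-generated/

    The harder part, as the question notes, is redirecting real users out of that directory without it looking like cloaking to the crawler.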

    Read the article

  • Wanna be a Rock Star?

    - by lydia.smyers
    We just launched the Oracle PartnerNetwork Specialized Partner Premier site, a new site specifically targeted at customers, partners and Oracle employees. The site is all about "You", our partners, who have achieved one or more of the over 50 specializations available today. Specialized. Recognized. Preferred. Yes, that's you - the chosen vendors. And just for you we're going to produce videos starring "You" and your stories, which will be available everywhere Oracle customers, partners and employees are. Sundance may be over, but we're just getting started. This is just one of the new benefits - want more? Industry analyst firm IDC offers kudos and says Oracle's premier "approach to putting top partners front and center with customers and prospects" has "raised the bar." We'll be advertising this site all over Oracle's major websites to give our specialized partners their deserved recognition. Bring on the buttered popcorn! Lights! Camera! What are you waiting for? http://www.oracle.com/specialized. Need to get specialized? Check out how: http://www.oracle.com/partners/en/opn-program/specialize/index.html

    Read the article

  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: Patch Set 3 (PS3, 11.1.1.4.0). PS3 supports additional platforms and applications, and adds several new features to the products. Highlights include:

    - Content Server (repository for UCM, URM & I/PM): new security capabilities, file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32- & 64-bit) support and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options, and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g and Oracle Access Manager (OAM) 10g; export of search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: support for UCM 11g Managed Attachments (support for 10g was released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): desktop support for Microsoft Office 2010, Adobe Reader X and Microsoft SharePoint 2010.

    Customer Webcast: We'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.

    More information: Downloads are now available on Oracle Technology Network (OTN) and will be available via eDelivery soon. Read the updated ECM documentation for 11.1.1.4.0, review the ECM 11.1.1.4.0 Upgrade & Patch Guides, and see the Release Notes.

    Read the article

  • Firefox or Chrome - how to force a specific encoding for a page

    - by Mike
    I am accessing an intranet site built by amateurs, constructed to be "best viewed by IE" (arghhh!). The site is in Portuguese, and all accented letters are garbled and do not appear as they should. As I build sites myself, I know that the best way to build a site in Portuguese and other Latin languages is to declare "charset=iso-8859-1" in the page's HTML encoding; this ensures cross-browser and cross-platform compatibility. But I have no way to change this, because I am only a visitor on this site, and I don't know which encoding they are using. What I'm asking is: is there a way to force my browser (Chrome or Firefox) to re-decode the page using the correct charset? I need this to work on Ubuntu.

    Read the article

  • Using Google Analytics to track Usernames

    - by DrStalker
    We have a SharePoint installation (MOSS, IIS 7.0, Windows Authentication, Windows 2008), and Google Analytics has been installed to track site usage. The site is an intranet site, and all users are authenticated before gaining access. Is there any way in Google Analytics to track user information, so we can see details of the login names of who is accessing content?

    Read the article

  • PublishingWeb.ExcludeFromNavigation Missing

    - by Michael Van Cleave
    So recently I have had to make the transition from the SharePoint 2007 codebase to the SharePoint 2010 codebase. Needless to say, there hasn't been much difficulty in the changeover. However, in a set of code that I was playing around with and transitioning, I did find one change that might cause some pain to others out there who have been programming against the PublishingWeb object in the Microsoft.SharePoint.Publishing namespace. The 2007 snippet of code that worked just fine in 2007 looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.ExcludeFromNavigation(true, itemID);
            publishingWeb.Update();
        }

    The 2010 update to the code looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.Navigation.ExcludeFromNavigation(true, itemID); //--Had to reference the Navigation object.
            publishingWeb.Update();
        }

    The purpose of the code is to keep a page from showing up in the global or current navigation when it is added to the Pages library. You can see that the update to the 2010 codebase actually makes more "object" sense: it specifies that you are affecting the Navigation of the publishing web, instead of the publishing web itself. I know that this isn't a difficult problem to fix, but I thought it would be something quick to help out the general public. Michael

    Read the article

  • Can PHP be run in Apache via mod_php and mod_fcgi side by side?

    - by Mario Parris
    I have an existing installation of Apache (2.2.10, Windows x86) using mod_php and PHP 5.2.6. Can I run another site in a virtual host using FastCGI and a different version of PHP, while still running the main site in mod_php? I've made an attempt, but when I add my FCGI settings to the virtual host container, Apache is unable to restart.

    httpd.conf mod_php settings:

        LoadModule php5_module "C:\PHP\php-5.2.17-Win32-VC6-x86\php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:\PHP\php-5.2.17-Win32-VC6-x86"

    httpd-vhosts.conf fastcgi settings:

        <VirtualHost *:80>
            DocumentRoot "C:/Inetpub/wwwroot/site-b/source/public"
            ServerName local.siteb.com
            ServerAlias local.siteb.com
            SetEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php.ini"
            FcgidInitialEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86"
            FcgidWrapper "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php-cgi.exe" .php
            AddHandler fcgid-script .php
        </VirtualHost>

        <Directory "C:/Inetpub/wwwroot/site-b/source/public">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
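
    In principle the two can coexist, with mod_php serving the main site and mod_fcgid the vhost. A sketch of the usual adjustments (assumptions about this particular failure, since the restart error text isn't shown): confirm mod_fcgid is actually loaded, and switch the mod_php engine off inside the FastCGI vhost so the global PHP 5.2 handler doesn't also claim .php there:

        # In httpd.conf (if not already present):
        LoadModule fcgid_module modules/mod_fcgid.so

        # Inside the FastCGI virtual host:
        <VirtualHost *:80>
            ServerName local.siteb.com
            DocumentRoot "C:/Inetpub/wwwroot/site-b/source/public"
            # Keep mod_php (PHP 5.2) out of this vhost entirely.
            php_admin_flag engine off
            FcgidInitialEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86"
            FcgidWrapper "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php-cgi.exe" .php
            AddHandler fcgid-script .php
        </VirtualHost>

    A missing LoadModule for mod_fcgid is a common cause of "Apache is unable to restart" here, since FcgidWrapper is then an unknown directive.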

    Read the article

  • IE9 appears to be ignoring RewriteRule in htaccess file

    - by mouli
    I have a site that uses SEF URLs and .htaccess RewriteRules to serve up the pages. This worked fine for several years until the arrival of IE9. Now it appears that the links are not being rewritten, and the site is dead in the water. I have tried different compatibility modes to no avail, I've played with the rewrite rules over and over, and I've tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the .htaccess file, but the site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. The site is www.marlboroughsounds.co.nz; a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw, and the rewrite rule that's not working looks like this:

        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]

    The link fails and serves up the browser's 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated, as I am stumped and running out of them.

    Read the article

  • Googlebot DNS error HostPapa

    - by Gravy
    Received a message from Google Webmaster Tools: "Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 40.0%. You can see more details about these errors in Webmaster Tools." The message ends with a recommended action.

    I contacted HostPapa and they deny that there is any issue with the site or server; support, in terms of what I can do to actually resolve this, is non-existent. The site is currently online, and I don't know much about DNS, so any advice about what I can do to resolve this problem would be much appreciated.

    Read the article

  • SharePoint domain authentication

    - by JL
    I have a local domain controller set up for MYDOMAIN.com, and on a separate local server I have a MOSS site running. The DNS is all working fine, but when I try to connect to the MOSS site using domain credentials I can't use the syntax MYDOMAIN\MyAccount; it expects [email protected]. What can I do to fix this issue, so I have normal domain login capabilities like every other SharePoint site out there?

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the web server was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocking was occurring, and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150, and Requests/sec for ASP.NET was averaging 5-10, yet Current Anonymous Users was around 200. Then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time, Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (with a Current Connections counter added):

    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/sec for ASP.NET: 5
    - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels.

    So, my questions are:

    1. Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
    2. What other counters should I be looking at if this happens again? (See the typeperf sketch below.)
    3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
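
    For question 2, one low-effort preparation is to let the built-in typeperf tool log the relevant counters to CSV around the clock, so the next incident is captured from the start. A sketch; "MySite" is a placeholder for the Web Service instance name:

        typeperf -si 15 -o counters.csv -f CSV ^
          "\Web Service(MySite)\Current Anonymous Users" ^
          "\Web Service(MySite)\Get Requests/sec" ^
          "\ASP.NET Applications(__Total__)\Requests/Sec" ^
          "\Web Service(MySite)\Current Connections"

    Fifteen-second samples keep the file small enough to run for days while still resolving a few-minute spike like the 200-to-500 climb.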

    Read the article
