Search Results

Search found 1317 results on 53 pages for 'webmaster sean'.

Page 43/53 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Redirect To Domain Before SSL Is Read

    - by Devin Dixon
    I had to switch servers and I want to redirect all SSL URLs to the non-SSL site. The problem I am running into is that the https site still throws an invalid certificate error even though Apache has the redirect implemented:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            DocumentRoot /data/sites/www.example.com/main/
            RewriteEngine on
            Redirect 301 / http://www.example.com
            SSLEngine on
            SSLCertificateFile /etc/httpd/ssl/www.example.com/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/httpd/ssl/www.example.com/ssl-cert-snakeoil.key
            ServerName www.example.com
            ErrorLog "logs/example.com-error_log"
            CustomLog "logs/example.com-access_log" common
        </VirtualHost>

    My question is, how can I do a redirect and avoid the invalid SSL certificate error in the browser?
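    A note on why the redirect alone cannot fix this: the browser completes the TLS handshake and validates the certificate before Apache ever gets to send the 301, so a self-signed snakeoil certificate will always trigger the warning first. A minimal sketch of the usual workaround, assuming a certificate the browser trusts for www.example.com can be installed (the file names below are placeholders):

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            # certificate and key the browser trusts for www.example.com (assumed to
            # exist); without them the warning appears before this vhost can answer
            SSLCertificateFile    /etc/httpd/ssl/www.example.com/trusted-cert.pem
            SSLCertificateKeyFile /etc/httpd/ssl/www.example.com/trusted-cert.key
            # only after the handshake succeeds does this redirect reach the browser
            Redirect 301 / http://www.example.com/
        </VirtualHost>

    The alternative is simply not to listen on port 443 at all, in which case browsers get a connection error rather than a certificate warning.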

    Read the article

  • ArchBeat Link-o-Rama for December 4, 2012

    - by Bob Rhubart
    Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups | The Old Toxophilist: "Although in many cases we, as Cloud Users, may not be too worried how the Virtualisation Algorithm decides where to place our vServers," says The Old Toxophilist, "there are cases where it is extremely important that vServers run on distinct physical compute nodes." There's plenty more on the subject in his blog post.
    Oracle Endeca (2.3) Record Level Security | Adam Seed: Adam Seed's blog post covers "the basics of security within Endeca Information Discovery, as these basic security objects are required in order to explain the implementation of record level security."
    ODI Handling DQ | Gurcan Orhan: Oracle ACE Director Gurcan Orhan suggests you have fun with these scripts for Oracle Data Integrator.
    Parleys Testimonial at GlassFish Community Event - JavaOne 2012: Video of Parleys webmaster Stephan Janssen's presentation at the GlassFish Community Event at JavaOne 2012, in which he explains why Parleys moved from Tomcat to GlassFish.
    Java Spotlight Episode 109: Pete Muir on CDI 1.1: This edition of Roger Brinkley's Java Spotlight Podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Muir talks about the features in CDI 1.1 and what to expect in the future.
    Webcast: Java Management Extensions with Oracle WebLogic Server 12c: Dr. Frank Munz and Dave Cabelus do the talking in this on-demand webcast focused on Oracle WebLogic Server 12c with Java Management Extensions (JMX).
    Using the Coherence API to get Portable Object Format bytes | Bruno Borges: Bruno Borges shares a code snippet that illustrates how easy it is to use the Coherence API.
    Thought for the Day: "Experience is something you don't get until just after you need it." — Anonymous. Source: SoftwareQuotes.com

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As the webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain (here I used mydomain.zzz): what does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where on StackExchange WOULD it be on topic?

        Delivered-To: [email protected]
        Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000
        Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26]) (envelope-sender <[email protected]>) by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP for <[email protected]>; 27 Jan 2011 02:32:47 -0000
        X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc
        Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700
        Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500
        Return-Path: [email protected]
        Status:
        Message-ID: <20110126173109.4d9d6c3f2b@1c3c>
        From: "Tech Support" <[email protected]>
        To: <[email protected]>
        Subject: Information, as instructed.
        Date: Wed, 26 Jan 2011 17:31:09 -0500
        X-Priority: 3
        X-Mailer: General-Mailer v.3
        MIME-Version: 1.0
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. William Faulkner, The Sound and the Fury

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain?

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though the redirect rule is set up in the Apache sites-enabled configuration file (i.e. Apache does the redirect early on; PHP is not even getting loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time. I checked in Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps of the Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
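    Webmaster Tools' change-of-address feature generally covers whole-site moves rather than a subdirectory-to-subdomain move, so serving cheap 301s really is the standard answer. A minimal sketch of a redirect that never touches PHP, assuming the old catalog lived under /catalog/ and now lives at catalog.example.com (both names are placeholders):

        # mod_alias only: matching requests are answered with a 301 straight from Apache
        RedirectMatch 301 ^/catalog/(.*)$ http://catalog.example.com/$1

    If Apache still cannot keep up, Webmaster Tools also lets you lower Googlebot's crawl rate for the site while the move is being digested.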

    Read the article

  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around 6 filters. Currently I have the page implemented with one filter as the base filter (part of the URL that would get indexed) and the other filters in the form of hash URLs (which won't get indexed). This way the duplication is less. Something like this: example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value Now as you may see, only one filter is within the reach of the search engine while the rest are hashed. This way I could have 6 pages. Now the problem is I expect users to use two filters as well at times while searching. As per my analysis using the Google Keyword Analyzer, there are a fair number of users that might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the URL would simply explode the number of pages, and sticking to the current way wouldn't let me target those users. I was thinking of going with at most 2 base filters and the rest as part of the hash URL. But the only thing stopping me is that it would lead to duplication of content as per Google Webmaster Tools' suggestions on URL structure.

    Read the article

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:
    1. Data-Server: An application with a RESTful interface which provides the data
    2. Website A: Provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: Provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID
    So the whole, non-private data is stored on (1), and the websites (2) and (3) can fetch and display this data, which is a representation of the data with additional cross-linking between them. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which page should go into the index and which should not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar posted questions about that, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

    Read the article

  • Best way to host multiple WordPress sites on a single VPS [migrated]

    - by Ben
    Not sure if this is a webmaster or a WordPress question; it's a bit half and half, sorry if I'm posting in the wrong place. Without using Multi-Site or installing new WordPress CMSs in second-level domains, what's the best way to get multiple WordPress installs running on my VPS (running Linux-powered CentOS 6 with WHM and cPanel)? It's currently working, but only by setting the permalinks option to the default setting, so the URLs aren't human-friendly. I have come across something called WPSiteStack, though I'd really rather not go down this route. Long story short, I need the following:
    - Separate installs, so one core / theme / plugin update doesn't affect all sites, and so the security of each site is improved;
    - 'Pretty' permalinks;
    - Each WordPress install must be in the root of its own domain to ensure that I can accurately measure my clients' quotas.
    It may also be worth noting that some functions within each install use the $_SERVER['DOCUMENT_ROOT'] and $_SERVER['HOST'] variables. I have already edited the httpd-vhosts.conf, httpd.conf and .htaccess files but this hasn't made any changes. So any ideas what I'm missing or doing wrong? Any help is much appreciated.
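    A minimal sketch of the usual pattern, assuming one vhost per client domain and one WordPress copy per DocumentRoot (all names and paths below are placeholders). WordPress's own .htaccess then handles the pretty permalinks, as long as mod_rewrite is enabled and overrides are allowed:

        <VirtualHost *:80>
            ServerName clientsite.example
            ServerAlias www.clientsite.example
            # each client gets its own copy of WordPress in its own DocumentRoot
            DocumentRoot /home/clientsite/public_html
            <Directory /home/clientsite/public_html>
                Options FollowSymLinks
                # lets WordPress's generated .htaccess rewrite pretty permalinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>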

    Read the article

  • Apache Configuration Issue - website without www going to default site

    - by Brian
    I have included a copy of my virtual host file for Apache below (however, I have hidden the IP address and domain name for now). My problem is that the following work:

        www.mydomainnamehere.org
        www.mydomainnamehere.com
        mydomainnamehere.com

    This one doesn't work:

        mydomainnamehere.org

    Instead of going to the document root listed below, it goes to the document root of the default site. What could be causing this?

        <VirtualHost [ipaddresshidden]:80>
            ServerAdmin [email protected]
            ServerName mydomainnamehere.org
            ServerAlias www.mydomainnamehere.org
            ServerAlias mydomainnamehere.com
            ServerAlias www.mydomainnamehere.com
            DocumentRoot /home/www/mydomainnamehere.org/html/
            ErrorLog /home/www/mydomainnamehere.org/logs/error.log
            CustomLog /home/www/mydomainnamehere.org/logs/access.log combined
        </VirtualHost>

    Read the article

  • What is the `ServerName` attribute for apache2 and what does it do?

    - by freddydoggie
    I do not know what this config setting means. Does it mean that it registers a domain name? Is it like DNS? Here is what I have for my apache2 default config:

        ServerName staugie.org
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks Indexes MultiViews
            AllowOverride All
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    Also, is there any way to register a free domain through the Apache Foundation?
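    For reference, ServerName has nothing to do with registering a domain or with DNS; it only tells Apache which hostname this configuration answers for (and what canonical name to use in self-referential URLs). A minimal name-based virtual hosting sketch, with placeholder hostnames, showing how the directive is used:

        # the browser resolves the name via ordinary DNS, then sends it in the Host
        # header; Apache compares that header against ServerName / ServerAlias to
        # pick which <VirtualHost> serves the request
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName staugie.org
            ServerAlias www.staugie.org
            DocumentRoot /var/www
        </VirtualHost>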

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report: "A description for this result is not available because of this site's robots.txt – learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message. The same appears to be true for Bing. We've taken the following action:
    - Submitted the site to be recrawled through Google Webmaster Tools
    - Submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!")
    Indeed a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to WordPress under /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs. I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointing to our site from both before and after the redesign, to report descriptions again? How can we resolve this problem and get our search traffic back to where it used to be?

    Read the article

  • Question about exim4 config syntax

    - by PeterMmm
    I'm trying to send a notification to the sender of a message when a message is sent to exactly one address in the local domain ([email protected]).
    Q1: What would the syntax for the condition be (the one below doesn't work)?

        notify_reply:
          driver=accept
          domains = +local_domains
          senders = ! ^.*-request@.*:\
            ! ^bounce-.*@.*:\
            ! ^.*-bounce@.*:\
            ! ^owner-.*@.*:\
            ! ^postmaster@.*:\
            ! ^webmaster@.*:\
            ! ^listmaster@.*:\
            ! ^mailer-daemon@.*:\
            ! ^root@.*:\
            ! ^noreply@.*
          condition = ${if eq {$received_for}{[email protected]}}
          no_expn
          transport=notify_transport
          unseen
          no_verify

    Q2: How do I write a multiline string in the config file for "text"?

        notify_transport:
          driver=autoreply
          [email protected]
          to=$sender_address
          subject=Your mail for
          text="Please resend your messasge to [email protected] This is a temporary modification."

    Read the article

  • How do I remove a URL from Google without having to have a Google E-mail Account

    - by PP
    Really simple question. I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google help for removing URLs it appears I have to use their "webmaster tools" which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer? Note: I already return 404 for the URLs in question using a rewrite rule - this appears to make zero difference to the crawler which continually attempts to fetch the page every 2 minutes.
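    Returning 404 does not make Googlebot stop re-checking a URL for quite a while, and without a Webmaster Tools account the options are limited to what the server itself can say. One account-free approach is a noindex response header, sketched below with mod_headers and a hypothetical path (mod_headers must be enabled, and the URL must stay crawlable, i.e. not blocked in robots.txt, for Google to see the header):

        # hypothetical path; tells compliant crawlers to drop the URL from their index
        <Location /the-unwanted-url>
            Header set X-Robots-Tag "noindex, nofollow"
        </Location>

    Serving 410 Gone instead of 404 is also commonly reported to age URLs out of the index faster, though neither stops the crawler from probing the URL immediately.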

    Read the article

  • Choosing open source vs. proprietary CMS

    - by jkneip
    Hi, I've been tasked with redesigning a website for a small academic library. While I have only been in charge of the site for 6 months, we've been maintaining static HTML pages edited in Dreamweaver for years. Last count of our total pages is around 400. Our university is going with an enterprise-level solution called Sitefinity, although we maintain our own domain and are responsible for maintaining our own presence. Some background: my library has a couple of Microsoft IIS servers on which this static HTML site has been running. I'm advocating for the implementation of a CMS while doing this redesign. The problem is I'm basically the lone webmaster, so I have no one to agree or disagree with my choice. There are also only 1-2 content editors for the site right now, but a CMS could change that factor. I would like to use the functionality of having servers that run .NET and MS SQL, but am more experienced setting up and maintaining open source software like WordPress or Drupal on web hosts. My main concern is choosing a CMS that will be easy to update / maintain / deal with upgrades (i.e., support) in case I'm not there in the future. So I'm wondering how to factor in the open source CMS vs. relatively inexpensive commercial CMS decision, and whether choosing PHP/MySQL vs. the ASP.NET framework as the development environment will play into my decision. Thanks for any input that can be offered based on the details I've given. Thanks, Jason

    Read the article

  • How to set auto redirection in Tomcat

    - by Registered User
    I have a site, http://social.openitup.in; right now what you are seeing is a default Tomcat 6 page. I am using mod_ajp as a front end, and the Apache vhost configuration for it is:

        <VirtualHost *:80>
            ServerName social.openitup.in
            ServerAdmin webmaster@localhost
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass / ajp://192.168.1.19:8009/
            ProxyPassReverse / ajp://192.168.1.19:8009/
        </VirtualHost>

    However, I have an application running on it at http://social.openitup.in/olat. What I want is that when someone opens http://social.openitup.in, rather than seeing the Tomcat 6 home page from /var/lib/tomcat6/webapps/ROOT/index.html, the person is redirected to the OLAT application, which is in /var/lib/tomcat6/webapps/olat. How can this be achieved? The above vhost configuration is on a machine separate from the one where OLAT is running.
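    A minimal sketch of one way to do this on the Apache side, assuming the OLAT context path on the backend is /olat: scope the proxy rules to /olat only, and let a plain redirect answer the bare root. This is a sketch against the configuration above, not a tested OLAT setup.

        <VirtualHost *:80>
            ServerName social.openitup.in
            ProxyRequests off
            ProxyPreserveHost On
            # proxy only the application path, so "/" is left for Apache to handle
            ProxyPass /olat ajp://192.168.1.19:8009/olat
            ProxyPassReverse /olat ajp://192.168.1.19:8009/olat
            # anyone opening the bare site is sent on to the application
            RedirectMatch ^/$ /olat/
        </VirtualHost>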

    Read the article

  • Will search engines discover that our old pages have been 301 redirected if there are no more links to them in the old site?

    - by Obay
    We've moved our website to a new domain. Thousands of our pages come from one PHP file in the old site (e.g. oldsite.com/news.php?id=<id>). So we added some code in news.php file to do a 301 redirect to the specific corresponding news article in the new website (newsite.com/news/<id>). We have not yet done a 301 redirect for the root of the old site (so we could display a notice to our users that we've moved), but all links inside it are already 301 redirected. My concern is that, when Google crawls our old website, it will no longer be able to find the old news articles and discover that they have been 301 Redirected -- is this correct? If so, does that mean our PageRank won't be carried over to the new site? I've also read that we would need to create a sitemap for the new site. Is it possible to indicate in the sitemap the old and new locations of specific pages? Because if not, how will Google know? (I'm not sure change of address in Webmaster Tools would be specific enough).
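    A 301 is only discovered when the old URL is requested, which crawlers will keep doing from their existing index and from external links even if the old site no longer links to the pages internally, so the redirects (and the PageRank they pass) will be found. For reference, a minimal mod_rewrite sketch of an Apache-level alternative to doing the redirect inside news.php (hostnames are placeholders); a plain Redirect directive cannot match the query string, so a RewriteCond is used and the trailing ? drops the old query string:

        RewriteEngine On
        # match oldsite.com/news.php?id=<id> and send a 301 to newsite.com/news/<id>
        RewriteCond %{QUERY_STRING} ^id=([0-9]+)$
        RewriteRule ^/?news\.php$ http://newsite.com/news/%1? [R=301,L]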

    Read the article

  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class:

        class CPhysicsMotionState: public btMotionState {
        protected:
            // This is the transform with position and rotation of the camera
            CSRTTransform* m_srtTransform;
            btTransform m_btPos1;
        public:
            CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform)
            {
                m_srtTransform = srtTransform;
                m_btPos1 = initialpos;
            }
            virtual ~CPhysicsMotionState()
            {
                // TODO Auto-generated destructor stub
            }
            virtual void getWorldTransform(btTransform &worldTrans) const
            {
                worldTrans = m_btPos1;
            }
            void setKinematicPos(btQuaternion &rot, btVector3 &pos)
            {
                m_btPos1.setRotation(rot);
                m_btPos1.setOrigin(pos);
            }
            virtual void setWorldTransform(const btTransform &worldTrans)
            {
                btQuaternion rot = worldTrans.getRotation();
                btVector3 pos = worldTrans.getOrigin();
                m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w());
                m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z()));
                m_btPos1 = worldTrans;
            }
        };

    I add a rigid body for the camera:

        // Create rigid body for camera
        btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f));
        btTransform startTransform;
        startTransform.setIdentity(); // forgot to add this line
        CVector vCamera = m_srtCamera.GetPosition();
        startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z));
        m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera);
        btScalar tMass(80.7f);
        bool isDynamic = (tMass != 0.f);
        btVector3 localInertia(0,0,0);
        if (isDynamic)
            cameraShape->calculateLocalInertia(tMass,localInertia);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia);
        m_rigidBody = new btRigidBody(rbInfo);
        m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        m_rigidBody->setActivationState(DISABLE_DEACTIVATION);

    This is the code in Update() that runs each frame:

        CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera();
        Quaternion qRotate = srtCamera.m_qRotate;
        btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w);
        CVector vCamera = CCameraTask::GetPtr()->GetPosition();
        btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z);
        CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState();
        cameraMotionState->setKinematicPos(rot, pos);

    Read the article

  • Subdomain redirect to WWW

    - by manix
    I have the domain example.com and the subdomain test.example.com running on an Apache server. For some reason, when I try to visit test.example.com it is redirected to www.test.example.com, and as a consequence a "Server not found" error is displayed in the browser. Both .htaccess files (root and subdomain folder) are empty. Additional facts: I have another subdomain, xyz.example.com, pointed to the public_html/xyz directory with some content inside (an index.html with a "hello world" message), and it works fine if I use xyz.example.com instead of www.xyz.example.com. So, can you help point me in the right direction? I have a VPS and am able to change any file if required. Below you can find my virtual host configuration.

        <VirtualHost xx.xxx.xxx:80>
            ServerName test.example.com
            ServerAlias www.test.example.com
            DocumentRoot /home/example/public_html/test
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/test.example.com combined
            CustomLog /usr/local/apache/domlogs/test.example.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ScriptAlias /cgi-bin/ /home/example/public_html/test/cgi-bin/
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/std/2/example/test.example.com/*.conf"
        </VirtualHost>

    Read the article

  • Do large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example:
    Correct URL: domain.com/correct-URL/
    Disqus created: domain.com/correct-URL/344322/ (throws 404) and domain.com/correct-URL/433466/ (throws 404)
    So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those errors (404s) rose, our site suffered a massive drop in traffic and the crawl rate dropped to 10% of what it was earlier. I wish to know: can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating 301 redirects for each bad URL to the correct URL and removing Disqus. Google, however, drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your input and shared experiences.
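    As a side note, since the bad URLs described above all have the shape of a valid post URL plus a trailing numeric segment, a single pattern-based rule can stand in for thousands of individual 301 entries. A sketch, assuming every bad URL really does follow that pattern and that no legitimate URL ends in a purely numeric segment:

        # collapse /correct-URL/344322/ (and similar) back onto /correct-URL/ with a 301
        RedirectMatch 301 ^/(.+)/[0-9]+/$ /$1/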

    Read the article

  • Configuration issue with respect to .htaccess file on Ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site is working on localhost, but it works as if there were no .htaccess rules specified, i.e. if I view a page as http://localhost/tshirtshop/nature-d2 then I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Output of sudo apache2ctl -M:

        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    Can anyone point out the mistake, if any, in the above configuration, or anything else I need to check?
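    One possible explanation, offered as a guess rather than a confirmed diagnosis: a request to http://localhost/tshirtshop/ is served by whichever vhost answers for localhost, and if that is the stock default site (DocumentRoot /var/www) rather than the tshirtshop vhost above, then the .htaccess falls under the default site's <Directory /var/www/> block, which ships with AllowOverride None, so the rewrite rules are silently ignored. A sketch of the change in the default site's configuration that would test this:

        <Directory /var/www/>
            Options Indexes FollowSymLinks
            # the default is "AllowOverride None", which makes Apache skip .htaccess files
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>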

    Read the article

  • What are some good courses to take my programming to the next level?

    - by absentx
    I am in search of either some in-person or online training that could take my coding to the next level. I am looking to attack two specific areas:
    - JavaScript: While I have been getting by with JavaScript for three or four years, I still feel like it takes a back seat to my other programming. I use jQuery a lot but would prefer to be proficient in pure JS also.
    - PHP: I feel pretty proficient at PHP, but I know there is only room to improve. Here I am interested in something that can teach me the more advanced aspects of the language, improve my code writing, and perhaps cover object-oriented PHP in depth also.
    I have looked into Netcom's training courses before, but I can't tell whether their advanced webmaster professional course would be a good fit or not. It seems kind of like a force-fed course, but I am interested in it because I am looking for something in the one- to two-week range that is targeted at what I am looking for. I have zero experience with any type of online course in terms of programming. It appears lots are available, but I am not sure about the quality.

    Read the article

  • Apache mod_proxy to another server

    - by trobrock
    I am using proxy_balancer in Apache2 to proxy requests for a Rails application to my Rails server on the port the application is running on. This is how it's set up:

    Rails Server
    Mongrel running on port 8000; when accessing the URL directly at http://rails_server:8000 the site loads fine.

    Apache Server
    Conf file for the site:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName myserver.com
            ServerAlias application.myserver.com
            <Proxy balancer://application_cluster>
                Allow from localhost
                BalancerMember http://ip.to.server:8000 retry=10
            </Proxy>
            ProxyPass / balancer://application_cluster
        </VirtualHost>

    The problem I am having is that going to http://rails_server:8000 works fine, but going to http://application.myserver.com loads the right content yet displays all the HTML as text, not rendering it as HTML.
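    For comparison, a stripped-down variant without the balancer, pointing at the same backend; this is a sketch for narrowing the problem down rather than a recommended final setup. If this renders the pages correctly, the issue lies in the balancer definition rather than in the Rails responses.

        <VirtualHost *:80>
            ServerName application.myserver.com
            # direct proxy to the single Mongrel instance, no balancer involved
            ProxyPass        / http://ip.to.server:8000/
            ProxyPassReverse / http://ip.to.server:8000/
        </VirtualHost>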

    Read the article

  • Page rank 0 penalty

    - by mark
    I have had a WordPress blog and a www website on the same domain for about one year. Together they come to about 170 pages. The page rank is still 0. I understand that page rank 0 is a penalty for duplicate content. The pages are indexed in Google, but there is still no page rank. In Google Webmaster Tools there is no indication of any problem. I asked for reconsideration of both the blog and the website a month ago. Google accepted the reconsideration, but it did not change anything. Other pages of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair page rank? A coworker told me that it might be the case that a link farm is using the content and I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly, so is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • How to receive mail in Qmail?

    - by Ivan
    I have a server that uses Qmail. It is installed by default and it is supposed to work. I've created a new domain and a new user (vadddomain + vadduser) without problems, but when I send an email from Gmail to [email protected] (the address I've created), it disappears. However, if I connect to the SMTP server directly (telnet domain.com 25) and post an email, it arrives in the user's queue. What's happening?!? Note: If I try to access my user through telnet domain.com 110, it seems my password is not correct, even though it's the same one I used when I created the user with vadduser.

    Read the article

  • Which Language Next? Python? Ruby? [closed]

    - by Ryan Craig
    I am a (relatively) beginning webmaster, with 2+ years of PHP experience. I also have some Java training and a bit of .NET. My company is now close to redeveloping the website that I work on, which is coded primarily in PHP but has some poorly written .NET in parts as well (it's confusing and ill-planned, but I didn't make any of those decisions; can anyone say action-oriented .NET and JScript?). So, I'm trying to decide which language I should learn next to quickly develop a new site. I will probably just redevelop it at first in PHP because I'm very comfortable with it. However, I'd like to migrate in the next year to something newer and more forward-thinking. That said, .NET is somewhat out of the question. We need cheap developers who are fast and can get pages up quickly. In this part of the country, part-time .NET developers are hard to find. So, we need something that will be pretty standard in the next few years, but we have some .NET SOAP 1.1 APIs that we use on our actual service (separate from the corporate website), which we will need to integrate part of the site with. Developing with PHP and SOAP is much more difficult than doing the same thing in .NET. So, I may have to develop the API-collaborative part in .NET just to make it easy, and then I'd like to use something else that is fast, flexible, forward-thinking, and will be relatively standard and easy to find developers for. So, any ideas? Python and Django? Ruby on Rails? Another framework? Thanks for your thoughts. Sorry, I know this was long, but it's all very convoluted and confusing, so I needed to be slightly long-winded.

    Read the article

  • PHP programming

    - by HARSHA
    Hi, I am learning PHP. I downloaded XAMPP, and the Apache server and MySQL are running properly in the XAMPP Control Panel. I tried a simple hello world program: I created a new folder in htdocs and saved my program in that new folder with a .php extension. But when I run the program it shows an error as follows:

        Object not found!
        The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again. If you think this is a server error, please contact the webmaster.
        Error 404
        localhost
        18-5-2010 11:51:44
        Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1

    Read the article
