Search Results

Search found 9717 results on 389 pages for 'pro'.

  • Hosting a web application on discountasp.net using SQL CE 5

    - by David Stanley
    I am hoping someone has experience with this, since the DiscountASP site is lacking in straightforward answers. I am building a lightweight web application and have decided to use SQL CE as its database. Two questions: Do I need an actual hosted database as well as the site in order for it to work? And do you know whether DiscountASP supports the use of SQL CE in a completely custom application (not with WebMatrix or any CMS builds)? If they don't, do you have any experience or recommendations for getting this done?
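
    For reference, SQL CE is a file-based, in-process engine, so the usual deployment is an .sdf file under App_Data rather than a separately hosted database. A minimal connection-string sketch (hypothetical names; assumes the SQL Server Compact 4.0 provider, the version most hosts supported):

        <connectionStrings>
          <!-- |DataDirectory| resolves to the site's App_Data folder -->
          <add name="AppDb"
               connectionString="Data Source=|DataDirectory|\app.sdf"
               providerName="System.Data.SqlServerCe.4.0" />
        </connectionStrings>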

  • Error when setting Piwik analytics

    - by bertran
    I've uploaded the latest version of Piwik onto my web server, which is hosted by GoDaddy.com on a Linux hosting plan. I'm setting it up (accessing it from my browser as instructed) and I have the Piwik installation page open on step 3 (database set-up) of 9. I don't know what to input in the "database server" field; the default is 127.0.0.1. When I leave that input as is and click "Next", it gives the error:

        Error when trying to connect to database server:
        SQLSTATE[HY000] [2013] Lost connection to MySQL server at
        'reading initial communication packet', system error: 111

    Changing that input to "localhost" gives me another error:

        Error when trying to connect to database server:
        SQLSTATE[HY000] [2002] Can't connect to local MySQL server
        through socket '/var/lib/mysql/mysql.sock' (2)

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help deciding whether it is worth including gender in page titles. In Webmaster Tools, I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. Is that the right way to assess the benefit of including "women" or "men" in page titles, given that it only looks at results already pointing to us? Is there another tool where I can check queries that may not include us in search results, like Google Insights? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q suggests that 1.1% of searches for "shoes" are also "shoes for women"; is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site if not more, splitting it into two segments (one portion of page titles including "women" and the other including "men"). Will doing so create massively repetitive keywords throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

  • CloudFlare DNS: Downtime failover host

    - by Dr. McKay
    My company uses CloudFlare for its DNS, but as our site is HTTPS-secured and we're on the free plan, we can't utilize CloudFlare's CDN services. Our host has fairly rare but not insignificant downtime. We can't migrate servers just yet, and I'd like to be able to either have the main domain redirect to the status domain, or simply resolve to the alternative status host in the event of downtime so users will stop bugging me asking if the site is down. Is this possible to do automatically using the free CloudFlare plan, or will I have to manually edit my DNS every time the site goes down?

  • No cache and Google AdSense performance

    - by Luca
    I'm developing a page where I need to avoid JavaScript caching by the browser, so I added these headers:

        <?php
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Pragma: no-cache');
        header('Expires: 0');
        ?>

    After this, browsers no longer cached the JavaScript, which sorted out the issue, but at the same time I noticed a drop in Google AdSense RPM. When I removed the added code, AdSense RPM returned to a good value. So, how can I avoid JavaScript caching without meddling with AdSense performance?
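
    One approach worth sketching (an assumption, not a tested fix): drop the page-wide no-cache headers and version the script URL instead, so browsers refetch the file only when it changes while the rest of the page, ads included, stays cacheable. Hypothetical PHP, with an assumed script path:

        <?php
        // Append the file's mtime as a cache-busting query string, so the
        // browser treats each new deployment as a brand-new URL.
        $src = '/js/app.js';                                  // hypothetical path
        $ver = filemtime($_SERVER['DOCUMENT_ROOT'] . $src);  // changes on deploy
        ?>
        <script src="<?php echo htmlspecialchars($src . '?v=' . $ver); ?>"></script>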

  • Seeking .htaccess help: Converting multiple subdomains (both HTTP and HTTPS) to www.domain.com using .htaccess

    - by Joshua Dorkin
    I've been trying to get an answer to this question on other forums (the folks at SuperUser thought this was the place I needed to post) and via my connections, but I haven't gotten very far. Hopefully you guys can help me find an answer. I've got a dozen old subdomains that have been indexed by Google, as both HTTP and HTTPS. I've managed to redirect all the subdomains properly, provided they are not HTTPS, but I can't get any of the HTTPS subdomains to properly redirect. Here's the code I'm using:

        RewriteCond %{HTTP_HOST} ^subdomain1.mysite.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]
        RewriteCond %{HTTP_HOST} ^subdomain2.mysite.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]
        RewriteCond %{HTTP_HOST} ^subdomain3.mysite.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

    This works great until someone goes to https://subdomain2.mysite.com, which is not redirected back to http://www.mysite.com. How can I get this to work? Additionally, I'm guessing there is an easier way than setting up a dozen pairs of RewriteCond/RewriteRule; is there any way to do this in just a few lines, including one where I list all the subdomains? I'd also like to redirect everything on https://www.mysite.com to http://www.mysite.com except for three folders: mysite.com/secure, mysite.com/store, and mysite.com/user. Is there any good way to add this to the .htaccess file?
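
    A consolidated sketch, under two assumptions the question can't confirm: mod_rewrite actually sees the HTTPS requests (the certificate has to cover the subdomains, or the browser never reaches .htaccess), and every non-www subdomain should collapse into www:

        RewriteEngine On

        # Any subdomain other than www, whether HTTP or HTTPS -> www over HTTP
        RewriteCond %{HTTP_HOST} !^www\.mysite\.com$ [NC]
        RewriteCond %{HTTP_HOST} ^[^.]+\.mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

        # On www, force HTTP everywhere except the three secure folders
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(secure|store|user)(/|$)
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]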

  • Proper way to create and work with a subdomain?

    - by Genadinik
    My site got affected by Panda, and I am trying to see if moving content to a subdomain would help. The site is comehike.com, and I created a subdomain, currently empty, at hiking.comehike.com. I have a directory /outdoors with some high-quality hand-written articles that I want to move to the new subdomain to see what happens. My questions: Should I just copy the files for those pages into the new subdomain's folder and change all the links in my pages from the original domain to the new subdomain? Should I do a 301 redirect to the new subdomain instead? And since test.site.com and www.site.com are different domains, will the new page have to start from scratch in terms of PageRank and its rankings in the SERPs?
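
    If the 301 route is chosen, a minimal .htaccess sketch for the main domain (assumes Apache with mod_alias; the path mapping is hypothetical):

        # Send every /outdoors page to the same path on the new subdomain
        RedirectMatch 301 ^/outdoors/(.*)$ http://hiking.comehike.com/$1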

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and "commerce", "product_info", and "products_id" all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links, so, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes per RFC 2616. That said, Wikipedia and common practice use it as a redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?
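
    A quick way to inspect the exchange yourself (hypothetical invocation; -I sends a HEAD request and prints only the response headers, which some servers handle differently from GET):

        curl -I "http://www.sparkfun.com/commerce/product_info.php?products_id=9266"

    The status line and Location header quoted above are what such a request should surface.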

  • Domains with similar names and issues

    - by abel
    I recently purchased one of those domain names like del.ico.us. While registering, I found that delicious.com was already in use. Argument: delicious.com belongs to the same category as my to-be website; it serves premium delicious dishes. Counter-argument: my to-be domain, though belonging to the same category, specializes in serving free but delicious dishes, or in giving out affiliate links to other sites serving premium delicious dishes. Additional counter-arguments: (1) delicious.com was not in English; (2) the del.icio.us in my domain name, though having the same spelling, is not going to be used in the same fashion. For example (this may not make sense, because the names have been changed), the "d" in "delicious" on my website actually stands for the Greek letter delta, and since internationalized domains are still not easily typable, I am going for the English equivalent. The prefix holds importance for the theme of the service which my website intends to offer. My question: Can I use the domain name del.icio.us for my website? How are these kinds of matters dealt with? (The domain names used are fictitious. I have already registered the domain but have not started using it. I chanced upon this domain name because it was short, easy to remember, and suited the theme of my website.)

  • Broken Links in different browsers

    - by kdorival
    I'm having problems with our website, http://www.accessiblehomehealthcare.com, which runs WordPress 2.7. All of a sudden our RSS links broke on the right side, which has happened before and which I fixed within five minutes. Now, when I fix it, it doesn't look right in different versions of IE or Firefox. I have IE 8 and Firefox 3.6.15, and it looks good for the most part, but there are a few parts where the links are broken. In one browser the links look OK, but go to another page and the links or logos are broken. Certain parts of the website should be static (identical) across the pages of the site, yet a link that is broken on one page is perfect on another. Is there a secret trick for keeping WordPress sites compatible with all browser versions, or is there a bigger issue? Any help or suggestions welcome.

  • Best SEO practices for backlinking etc.

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise for search engines. I've been submitting between 5 and 20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums to mention the website in question; however, I don't seem to be making much of an impact in Google. I know that SEO takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself and a few other search engines, but nobody is willing to talk about extensive SEO practices, as they normally don't want people knowing their formulas. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show whether it's exactly correct in what it's saying. If anyone could help it would be much appreciated; thank you very much.

  • Free or Low Cost Web Hosting for Small Website [duplicate]

    - by etangins
    This question already has an answer here: How to find web hosting that meets my requirements?

    I have a small website (between 2,000 and 10,000 page-views a day) and I'm looking for a free or low-cost web host. I tried 50webs.com, but their server breaks down. So as not to cause debate, I am also just looking for links to good information sources about web hosting, if finding a good web host is too general a question. I currently use only HTML, CSS, and JavaScript, though I'm considering learning PHP and other more advanced languages to step up my game.

  • Renewing a SSL certificate with GoDaddy

    - by Flavien
    GoDaddy sells SSL certificates for $12 per year (the most basic one). I bought one of those last year, and now it is time to renew. However, they are now asking $50 for the renewal (the $12 is apparently a discount). Is there a way to get the $12 price for a renewal? Will it work if I buy a new certificate at $12 and use the same host name as the one I had before, or are they going to prevent me from doing that?

  • For a blog with posts and categories, what are the best ways to create user-friendly and SEO-friendly URLs?

    - by Jayapal Chandran
    I am creating a module on my website which displays ringtones; it is like creating blog posts and categories. It will have categories (tags) and posts (I am using "category" and "tag" interchangeably). I am using the following link structure for this module:

        sitename.com/blog
        sitename.com/blog/category/category-name-slug/     (lists all ringtones of that category/tag)
        sitename.com/blog/title/name-slug-of-the-ringtone/ (displays the details and a download link)

    On every page, the left side lists the categories/tags. This is how I have formed the URL structure. I hope it will be user-friendly, but will it be SEO-friendly? Please hint if I am missing something or if there are other ways to improve. Meanwhile, I am browsing the net for more information on linking content (categorizing) and on the best ways to structure it for users and search engines.
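
    For the slug parts of those URLs, a minimal PHP sketch (a hypothetical helper, not from the question) that lowercases a title and collapses everything non-alphanumeric into hyphens:

        <?php
        // "Crazy Frog (Remix)!" -> "crazy-frog-remix"
        function slugify($title) {
            $slug = strtolower(trim($title));
            $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);  // runs of symbols -> one hyphen
            return trim($slug, '-');                           // no leading/trailing hyphen
        }
        ?>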

  • Why do my Google sitelinks show gibberish for a PDF link?

    - by Tom
    I have a website which Google lists nicely along with sitelinks. One of the sitelinks, to a PDF file, shows un-human gibberish, e.g.

        67,8;45:: 56 83 @7<1. (7/0;,*;: /59( (7/0;,;<7, <7)(60:4 (9<7 /+ +2, VU

    I thought it might be due to the PDF's title property, so I changed it, but there hasn't been an improvement to the sitelink. Other PDF sitelinks are fine and display the title property as desired. Does anyone know how I might rectify this problem, or what might be the cause? My uninformed guess is that it's some transliteration problem between code and display text, which, I suppose, means I ought to recondition the PDF file in some way; I'm not sure how.

  • New TLDs and Their Impact on SEO [duplicate]

    - by Lynda
    This question already has an answer here: When, if ever, and how, would Google, Yahoo, Bing, etc. add the new gTLDs to their search results?

    With a whole slew of new TLDs becoming available, how will they affect SEO? Some of these domains are fairly general, such as .ninja, while others are very specific, such as .dental, and this leaves me curious as to how these new TLDs will ultimately affect SEO.

  • Is it better to have AWS EC2 and RDS in the same Availability Zone?

    - by Dan
    I run a web app in an AWS EC2 instance and the database for the app in an RDS instance both in Amazon Web Services Region East-1. However, one of them is in Availability Zone 1a and the other is in 1d. Am I getting all the speed benefits of having both instances in the same "data center" (East-1) even if they are in different Availability Zones, or can I optimize by moving them to the same Availability Zone?

  • Will Bingbot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content, etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible, please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?

  • Prevent mail flagged as spam when switching mail servers (new SPF records)?

    - by Jakobud
    For our business, we send out a significant number of newsletter alerts to customers who sign up on our website. We used to send this mail directly from our web server via PHP, but because the web server limited the number of emails we could send per day, we purchased a VM server at a different host (one that doesn't throttle email) and we are going to use that account solely for sending out the emails. Now that the SPF records are going to be different from what they used to be and the source mail server is different, what steps need to be taken to prevent these emails from being flagged as spam? I know Gmail is pretty smart about determining whether the sending server is the one it expects for the domain (for flagging phishing emails, etc.), and we don't want that to happen to our emails. Just sending a couple of test emails out, Gmail shows the SPF result as:

        Authentication-Results: mx.google.com; spf=neutral
        (google.com: XXX.XXX.23.176 is neither permitted nor denied by best
        guess record for domain of [email protected]) [email protected]

    So is there anything we need to do with regard to SPF records as we move forward?
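
    A minimal sketch of the kind of TXT record that turns that "neutral" into a "pass" (hypothetical zone-file line; the ip4 value should be the new VM's public address, kept elided here, and the mechanism list must cover every server that legitimately sends for the domain):

        ourdomain.com.  3600  IN  TXT  "v=spf1 mx ip4:XXX.XXX.23.176 ~all"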

  • Using hreflang to specify a catchall language

    - by adam
    We have a site primarily targeted at the UK market, and are adding a US-market alternative. As per Google's recommendations:

        To indicate to Google that you want the German version of the page to be
        served to searchers using Google in German, the en-us version to searchers
        using google.com in English, and the en-gb version to searchers using
        google.co.uk in English, use rel="alternate" hreflang="x" to identify
        alternate language versions.

    Which gives us:

        <link rel="alternate" hreflang="en-gb" href="http://www.example.com/page.html" />
        <link rel="alternate" hreflang="en-us" href="http://www.example.com/us/page.html" />

    We do get enquiries from other areas of the world, particularly where there are expat communities (Dubai, UAE, Portugal, etc.). By adding the above tags, is there a risk that Google will only surface our site for UK and US search users? Do we need to specify a catch-all that will default all other searches to our UK site?
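
    Google does define a catch-all annotation for exactly this case: hreflang="x-default" marks the page to serve when no other language/region version matches. A sketch, assuming the UK page should be that default:

        <link rel="alternate" hreflang="en-gb" href="http://www.example.com/page.html" />
        <link rel="alternate" hreflang="en-us" href="http://www.example.com/us/page.html" />
        <link rel="alternate" hreflang="x-default" href="http://www.example.com/page.html" />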

  • A record DNS, nameserver help

    - by Josip Gòdly Zirdum
    I just installed Kloxo on my VPS and I want to link my domain to that server, which it sort of already is: I connected it via an A record. That works, in that the IP points to my server, but how do I make a website using it? I tried adding the domain, but this doesn't work. I feel I'm not explaining this well. On my server, Kloxo asked me to create a DNS template, so I did, creating the nameservers ns1.mydomain.com and ns2.mydomain.com. Then I added the domain mydomain.com to the panel. I go to the folder it creates, but no matter what file I put there, it won't work. Any ideas? Is there a way to skip adding the domain to Kloxo and just treat the server's IP as the domain, since the A record points there anyway? I don't intend to host another website on the server.
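
    As a first sanity check (hypothetical names, run from any machine with dig installed), confirm what the world actually resolves; both answers should be the VPS's IP before anything in the panel can matter:

        dig +short mydomain.com A
        dig +short ns1.mydomain.com A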

  • How to verify real people?

    - by Gerben Jacobs
    For a music community, I want bands to be able to verify themselves. What is the best way to do this? For example, I could let the record label email me, but some bands are independent. I could also ask them to put a code on their website or Facebook page and then check it manually. I'm not looking for a waterproof solution per se, so no scanning of real-life documents, and I'm okay with doing the checks manually. In other words, how can you verify real people by their virtual presence?

  • Content theft - Where can I go from here?

    - by Toby
    I am the webmaster of a very successful blog in a fairly small niche. Recently our success has started to bite us, with people copying posts from the site without consent and trying to pass them off as their own work. Most sites stop as soon as you contact them, but one in particular, a Blogger site, persists in passing off our content as its own. Every post we find we report to Google, and they have been fairly good at taking the posts offline within a day or two, but this isn't good enough, nor is it a long-term solution. Given the nature of what is being blogged about, after 24 hours a post is pretty much useless, so I need some way to just stop them from taking our content. Any ideas? I don't want to go down the route of using a third party for people to get our RSS feed, but I guess that is one option.

  • Symbolic link not allowed or link target not accessible: /var/www on Ubuntu 11.04

    - by Jamie Hutber
    I am getting a 403 when I access http://mayfieldafc.local/. Looking in the Apache logs, I see:

        [Wed Nov 16 12:32:59 2011] [error] [client 127.0.0.1] Symbolic link not
        allowed or link target not accessible: /var/www

    I have what I believe to be the correct permissions set on /var/www: hutber (my user) can create and delete files, and can also execute as program on this folder. In Mayfield's vhost it's:

        <Directory /var/www/mayfieldafc/docroot>
            Options +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I am pulling my hair out not being able to work on my sites with my work Ubuntu install. I know of nothing else that could be affecting this, so any ideas?
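
    A hypothetical first check, since Apache follows a symlink only if its own user can traverse and read the target: every directory along the target path needs the execute (search) bit for that user. util-linux's namei prints the modes of each path component:

        namei -m /var/www/mayfieldafc/docroot
        # and, if a component lacks the bit for Apache's www-data user, e.g.:
        sudo chmod o+x /var/www /var/www/mayfieldafc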

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

        http://blog.stackoverflow.com
        http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it: my IP address was quickly banned from Google for using it; I got lots of 500 and 503 errors and "waiting 5 minutes…" messages; and ultimately I could recover the text content faster by hand. I've had much better luck using a list of all blog posts, clicking through to the Google cache, and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving… more difficult. Any general tips for recovering website pages from internet caches, and in particular, places to recover archived images from website pages?

    (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… unless you have a time machine…)
