Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • What is the best way to promote a programming blog?

    - by paul
    (The guys from 'Programmers' referred me here...) How do you promote your programming blog? I recently started http://blackforestcoder.blogspot.com/ to record my progress working with new technologies and ideas. The main aim is to provide a list of pitfalls and solutions, and also to get feedback from readers. Since I set it up 10 days ago I have only had about 2-3 hits, even though Google is supposed to be indexing it. How might I boost the hit rate?

  • Serious Forum Design Issue

    - by John
    I want to launch a school forum for a country. I'm in a dilemma as to how to design this forum. The country will have many states, each state will have cities, each city will have many schools, and each school will have many classes (1st-12th). It looks like this: States - Cities - Schools - Classes. I want to make each class a forum. Now the biggest problems are twofold: first, if I were to list even 1,000 schools it would create a huge number of forums, and I don't think forum software (phpBB, MyBB) is designed to support that many. Secondly, creating and maintaining huge forums is also very difficult. Ideally I'd like to duplicate the existing hierarchy into the forum structure. I've done a lot of searching, and I've found that forums of this type are infeasible and such designs don't work, leaving aside the fact that nobody would use such a forum. Keeping this in mind, how should I go about designing this forum?

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I noticed that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway, because I don't like the idea of people sucking up my bandwidth; I simply use this code to protect my sites from being hotlinked:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But in order to allow Google Images search to bypass my hotlink protection (I want Google Images to show my images), would it suffice to add lines like these:

        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    Because I'm wondering: does the crawler crawl just from google.com? And what about google.it, google.co.uk, etc.? FYI: I did not find info about this in Google's official guidelines. I suppose hotlink protection prevents Google Images from showing my images, because I did some tests and it seems hotlink protection does prevent my images from being shown in Google Images search results.
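
    A minimal .htaccess sketch of one common approach (assumptions on my part: Apache mod_rewrite, and that Google's image crawler is identified by its Googlebot-Image user agent; the google.<tld> pattern is illustrative, not a complete list). Googlebot-Image typically sends no Referer header at all, so the empty-referer exemption in the first condition is usually what matters:

        RewriteEngine on
        # Allow empty referers (covers Googlebot-Image, which sends none).
        RewriteCond %{HTTP_REFERER} !^$
        # Allow your own pages.
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?mydomain\.com(/.*)?$ [NC]
        # Allow any google.<tld> referer: google.com, google.it, google.co.uk, ...
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?google\.([a-z]{2,3})(\.[a-z]{2})?(/.*)?$ [NC]
        # Never block the image crawler itself, whatever it sends.
        RewriteCond %{HTTP_USER_AGENT} !Googlebot-Image [NC]
        RewriteRule \.(jpe?g|png|gif)$ - [F,NC]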

  • What are the requirements to test a website using jquery.get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this as well. The function I am looking at is .get(), for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to upload it to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie. PS: Here's the JS/jQuery code for downloading the files into an array:

        g_lists = new Array();
        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: just a note about the path variable. I think it's correct, but I'm not 100% sure; I'm new to web development. Here is the directory tree of the site, and some examples of what path produces; maybe it will help, can't hurt.

        .
        +-- include
        │   +-- jquery.js
        │   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its document directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then, to run the site, I navigated my browser to "localhost" and everything worked.
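
    For local testing specifically, a lighter alternative to a full LAMP install (my suggestion, not from the original post; it assumes Python, which ships with Ubuntu) is to run Python's built-in static file server from the site's root directory and browse to http://localhost:8000/ - jQuery's $.get() works fine against it:

        # Python 2's built-in static server; serves the current directory over HTTP.
        python -m SimpleHTTPServer 8000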

  • My site has crashed... does anyone have some info?

    - by marwan
    Hi all, I registered a domain name for my website with a hosting provider. I gave the domain name, along with the FTP details, to a freelancer to develop the site in WordPress. The freelancer developed it, he got full payment, and the site was working fine. Since that time, I have not changed the admin login or the FTP details, which means this info is still known to the freelancer. A week ago, I found that some links on my site were not working. I sent him a mail about this, and he said that he would fix it if I gave him the FTP details, so I did. Next, I found that the entire site was gone. Then he sent me a mail, unprompted, saying that someone had gotten access to my server, removed all the files of my site, and installed Drupal instead, and that he could rebuild the site in one day by charging a full fee of 250 USD again. Does anyone know what I can do in this situation to find out who did this? Could it be the hosting provider or the freelancer? And is there any possibility of getting my site back on the server? I will appreciate any info on this. Regards, thanks.

  • When to reply 400 Bad Request

    - by KajMagnus
    According to www.w3.org, a web server should reply with status code 400 Bad Request if: "The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications." Does that mean only requests that violate the HTTP spec itself? Or does it include a request that my particular web app considers broken? When would you reply 400? For example, if my web app expects a query string to always include a "function=..." parameter, would you reply with 400 Bad Request or 403 Forbidden? (403 means "The server understood the request, but is refusing to fulfill it.")
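
    One way to frame the distinction in code, as a minimal Python WSGI sketch (the "function" parameter comes from the question; the whitelist of allowed values is hypothetical): a request the app cannot interpret at all gets 400, while a well-formed request the app refuses gets 403.

        from wsgiref.simple_server import make_server
        from urlparse import parse_qs  # Python 2; urllib.parse on Python 3

        def app(environ, start_response):
            params = parse_qs(environ.get('QUERY_STRING', ''))
            if 'function' not in params:
                # Malformed from the app's point of view: it cannot be processed.
                start_response('400 Bad Request', [('Content-Type', 'text/plain')])
                return ['Missing required "function" parameter\n']
            if params['function'][0] not in ('list', 'get'):  # hypothetical whitelist
                # Understood, but refused.
                start_response('403 Forbidden', [('Content-Type', 'text/plain')])
                return ['Function not allowed\n']
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['OK\n']

        if __name__ == '__main__':
            make_server('', 8080, app).serve_forever()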

  • Page for Page Redirect in Google App Engine

    - by clifgray
    I have recently changed the domain name for a webapp I run on Google App Engine, and I am wondering if there is a simple way to do a page-for-page redirect from my old domain to the new one. Code-wise everything is staying exactly the same; I just want it to go to the new domain. I am using Python and the webapp2 framework for the webapp. I know I could go through and, for every single handler, do:

        webapp2.redirect('the specific url', permanent=True)

    But I am hoping for a simpler solution.
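
    A minimal sketch of one way to avoid touching every handler (assuming webapp2, with a placeholder for the new domain): deploy a single catch-all handler on the old domain that 301-redirects every request to the same path and query string on the new host.

        import webapp2

        NEW_HOST = 'http://www.my-new-domain.com'  # hypothetical; substitute your domain

        class RedirectAll(webapp2.RequestHandler):
            def get(self, path):
                # path_qs preserves both the path and any query string.
                self.redirect(NEW_HOST + self.request.path_qs, permanent=True)

        app = webapp2.WSGIApplication([(r'/(.*)', RedirectAll)])

    Routing all URLs of the old app to this handler turns the old site into a pure redirector, with no per-handler changes.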

  • Forum widget for website

    - by Ivan Kuckir
    I have a Discussion section on my website, and I am getting 3-10 comments each week (this may increase in the future). I am using one Disqus "widget" for the whole discussion, but I would like to give it some better structure, with threads, categories, etc. Do you know of a "forum widget" service with the functionality of phpBB (threads, categories, ...) and the simplicity of Disqus (installs with an IFRAME, login with Facebook/Google, ...)?

  • Do you still get a bounce in Google Analytics if all the linked pages/content are loaded dynamically?

    - by sam
    Google Analytics describes a bounce as a user that visits and leaves after their first page. But if your site is a one-page site, with content loaded dynamically using JavaScript, you could have a user on your site go through loads of info, text, and images - would that still count as a bounce? Or once they click on an a-tag, even if it is <a href="#">, can Google Analytics see that? (I'm aware of click tracking in Analytics, but I was wondering if Google picks up these clicks by default.)

  • First project is a big one - how much should we charge?

    - by confuzzled
    Two of my cousins and I started a freelance computer repair/web design business just to make some money on the side during college, and we received our first major web design project about three weeks ago. Now, we've created websites before, but it was mostly for family businesses and we never really charged money, and most of the websites were static and didn't really require a CMS. This project, however, was a big one (for us, anyway). We created a news site that had several categories; we created the banners; we created a classifieds page (not a web app, just something static that they control). Several links, a few graphical assets, a CSS drop-down menu, an RSS feed from a different news site, weather - all the normal stuff you would find on a regular news site. On top of that we put in all the usual Joomla stuff (search, JComments, JSlide pictures, JCE, etc.). Then we uploaded the first 10 articles they gave us, and we are going to train them in how to use Joomla. Now, at first we decided on 700 dollars. I assumed they just wanted a simple blog-like website where they could upload articles, but then we had a meeting, and they asked for a lot more. Note: we did not hard-code the template from scratch, but customized the Gantry framework to fit their needs. We did code quite a bit, however. I estimate that we put in about 50-60 hours in total. I'm wondering if 700 dollars is a bit low; this price is definitely not set in stone. Please keep in mind that this is our first project and we are newbies; please be kind. Thank you!

  • Is there a class or id that will cause an ad to be blocked by most major adblockers?

    - by Moak
    Is there a general class or ID for an HTML element that a high majority of popular ad blockers will block on a website they have no specific filter information for? My intention is to have my advertisement blocked; avoiding automatic blocking is easy enough. I was thinking of maybe borrowing some IDs or classes from big advertising companies that are already being fought off quite actively. Right now my HTML is:

        <ul id="partners">
            <li class="advertisment"><a href="#" class="sponsor"><img class="banner"></a></li>
        </ul>

    Will this work, or is there a more solid approach?
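
    A sketch of the filter-list angle (an assumption on my part, so test it with a blocker installed: the generic cosmetic rules shipped with EasyList, which most popular ad blockers use, hide elements by common class names such as "ad", "ads", or "adsbox" even on sites they have no specific rules for, and those rule sets change over time):

        <!-- Class names chosen to match generic EasyList-style hiding rules. -->
        <ul id="partners">
            <li class="ad adsbox"><a href="#" class="sponsor"><img class="banner ads"></a></li>
        </ul>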

  • Perl script rendered in browser as code through symlink - fine when accessed directly

    - by John Dittmar
    I have a Rails 4 app that has some views that post to Perl CGI scripts. The Perl scripts are accessed via a symbolic link to a folder called "cgi-bin". When I navigate to a Perl script through the symbolic link, it is rendered as text instead of executed (i.e. localhost:3000/cgi-bin/test.cgi); however, when I access it directly, it executes without issue (i.e. localhost/path/to/cgi-bin/test.cgi). I am using Apache 2 on OS X. In the directory localhost/path/to/ I have an .htaccess file that contains the following:

        # General Apache options
        AddHandler fastcgi-script .fcgi
        AddHandler cgi-script .cgi
        Options +FollowSymLinks +ExecCGI

    I have the exact same lines in the .htaccess file in localhost:3000/. I have also uncommented AllowOverride All in httpd.conf. There are no errors in Apache's error log. When I access the direct link to test.cgi, a new line is appended to Apache's access log; when I access the script through the symbolic link (and it is rendered as text), no line is appended to the access log. Any idea why this error occurs? This setup worked fine with a previous version of Rails on OS X, but recently I upgraded to Mavericks and figured I should update the Rails application to v4.0 as well.
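
    One detail worth noticing (my observation, not something from the original thread): localhost:3000 is normally the Rails development server, not Apache, which matches the missing access-log lines - those requests never reach Apache, and Rails serves the .cgi file under public/ as a plain static file. A sketch of serving the scripts through Apache itself, assuming Apache 2.4 (as shipped with Mavericks) and a hypothetical path:

        # httpd.conf sketch: map /cgi-bin/ URLs to the real directory and
        # enable CGI execution there, instead of symlinking under public/.
        ScriptAlias /cgi-bin/ "/Users/you/Sites/cgi-bin/"
        <Directory "/Users/you/Sites/cgi-bin">
            Options +ExecCGI +FollowSymLinks
            AddHandler cgi-script .cgi
            Require all granted
        </Directory>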

  • Images from remote source - is it possible or is it bad practice?

    - by user1620696
    I'm building a management system for websites, and I had an idea related to image galleries, but I'm not sure it's a good approach. Since the image space needed is related to how many images a user might upload, I thought about using cloud services like Dropbox, Mega, and Google Drive to store images and load them when needed. The obvious problem with this approach is that downloading the images from the third-party service could hamper the user experience due to increased download times. Is there any way to store the images belonging to an image gallery on a remote source without hampering the user experience with slow loading? Or is this approach really not good practice?

  • I Purchased a Domain that Previously had a Google Apps Account. How Do I Re-Create The Google Apps Account or Take Ownership of it?

    - by jmort253
    I recently purchased a domain name. We'll call it example.com. The previous owner of the domain had a Google Apps account. Now that I own the domain, I want to create a Google Apps account so I can point the domain www.example.com to one of my Google App Engine domains. We'll call it application.appspot.com. Google App Engine won't allow me to add the domain without verifying ownership by creating a Google Apps account or logging into Google Apps, but I don't have access to the old Google Apps account. We've tried going to this address to take ownership: https://www.google.com/a/cpanel/example.com/ResetAdminPassword?c=LONG_KEY&hl=en_US We retrieved a new password, but it wouldn't tell us what the login name is. How do you find out the login name?

  • WordPress is truncating and replacing category name and description in version 2.7.1 [migrated]

    - by Jayapal Chandran
    Version: WordPress 2.7.1. I recently created a new category in a WordPress blog. I gave it a long category name, "win32 api programming", and the description "windows api programming and win32api programming". But after saving it, the name and description had changed to name: "win32 api" (instead of "win32 api programming") and description: "Win32 api snippets". I don't want to upgrade my WordPress, because I don't like some new features in the newer releases. What should I do to keep my actual strings (names) intact?

  • Why is domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com, and now I have moved to the real domain, www.realdomain.com; www.exampleone.com is now parked by 1and1 (the default). Now, when I test to see which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

  • Can I use a 302 redirect to serve up static content from a URL with _escaped_fragment_?

    - by Starfs
    We would like to serve up SEO-friendly AJAX-driven content. We are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the "?_escaped_fragment_=" string and sends it to a static page? For example, /snapshot/yourfilename/. How will Google react to this? I've gone through the documentation, and it's not very clear. The quote below from Google's documentation is all I can find. I'm not sure if they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or if this is about redirecting the hashtag URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
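
    For what it's worth, a minimal .htaccess sketch of the redirect being discussed (assumptions: Apache mod_rewrite, snapshots stored under /snapshot/, and AJAX pages hanging off the root URL - adjust the patterns for real URLs):

        RewriteEngine On
        # Match the _escaped_fragment_ parameter Google substitutes for #!.
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
        # 302 (temporary), so Google keeps showing the #! URL in results,
        # per the documentation quoted above.
        RewriteRule ^$ /snapshot/%1? [R=302,L]

    The trailing "?" drops the original query string from the redirect target.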

  • How to handle new domain names?

    - by michael
    I have a new product, which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to have found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords, since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase www.penink.com and www.penoutofink.com, which for argument's sake are highly searched and the perfect keywords to get eyes on my money site, www.inkywink.com. Two questions: 1. What is my best option to leverage those names so that they appear near the top of searches and I can get traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create a little original content on each, with links to my main site? 2. If I just have them redirected to inkywink.com, can I use keywords in the meta tags and headers for each site separately, or do they all automatically get the same headers and tags as the site they redirect to? Thanks to anyone who can help, as I'm a real newbie to all this.
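
    A minimal sketch of the redirect option (assuming Apache with mod_rewrite on each keyword domain; the domain names come from the question):

        RewriteEngine On
        # Permanently redirect penink.com (with or without www) to the money site.
        RewriteCond %{HTTP_HOST} ^(www\.)?penink\.com$ [NC]
        RewriteRule ^(.*)$ http://www.inkywink.com/$1 [R=301,L]

    With a 301, the redirecting domain's own meta tags and headers are never indexed; crawlers follow the redirect and index the target page's markup instead.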

  • What is the best way to have the same website on multiple domains?

    - by Daniel Magliola
    I would like to have the same website selling a specific product on multiple domains, to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I have a redirect from all domains minus one to that last one, do I still get any bonus for the "magic exact domain match jackpot"? The same question applies to canonical URLs. What's the best way to approach this? Thanks!
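
    For the canonical-URL variant, a minimal sketch (hypothetical URL; the tag goes in the <head> of each duplicate page):

        <!-- Tells crawlers which copy is authoritative when content is duplicated. -->
        <link rel="canonical" href="http://www.main-domain.com/product.html" />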

  • How long does it take for Google Webmasters to index site after submitting sitemap? [closed]

    - by Venkatesh Hodavdekar
    Possible Duplicate: Why isn't my website in Google search results? I submitted my website to Google today, using a sitemap in Google Webmaster Tools. The status of the sitemap says OK, and it shows that 12 URLs have been recognized. I was wondering how long it takes for the links to get indexed, as the indexed-URL option says "No data available. Please check back soon." I am not sure if it is showing this message due to some error, or if everything is fine.

  • How to target just one search engine and optimise for that

    - by mickburkejnr
    I've been dabbling with SEO a lot in the last 6 months, and one thing that has surprised me is the disparity between Google and Bing in the way they deliver results. A website ranked for a specific keyword/phrase on Google may rank 3rd on the first page, but the same keyword/phrase on Bing will show the same website ranked 15th. I came up with the idea of increasing traffic to my website by targeting Bing instead of Google, for several reasons. The biggest one is that while it's not the biggest search provider, people still use it, and I feel that if other websites have been "neglected" and not optimised for Bing, my website would stand a better chance of getting near the top of its search rankings. The question, though, is how would I do this? A lot of the SEO advice on the internet is generic, but I can't help feeling it's Google-oriented, for obvious reasons. How could I optimise my website to be Bing-friendly rather than Google-friendly? I know it sounds like suicide to take myself out of the Google mindset, but I feel it could work wonders for traffic to the site.

  • Over-reporting in Google Analytics - social media stats

    - by colmcq
    I have a client whose traffic from social media reads thus: 80% from Facebook, 1% from Twitter. This suggests they are not exploiting Twitter at all, and that observation was in my presentation, but my boss took it out, claiming Twitter stats are under-reported in Google Analytics. I can't substantiate this claim and wonder where she got this idea from. Can anyone shed light on this? Are my stats wrong, and should I disregard these figures? 80 to 1 seems like one hell of an under-report! Thanks, c

  • How can I stop a bot attack on my site?

    - by tnorthcutt
    I have a site (built with WordPress) that is currently under a bot attack (as best I can tell). A file is being requested over and over, and the referrer is (almost every time) turkyoutube.org/player/player.swf. The file being requested is deep within my theme files and is always followed by "?v=" and a long string (i.e. r.php?v=Wby02FlVyms&title=izlesen.tk_Wby02FlVyms&toke). I've tried setting an .htaccess rule for that referrer, which seems to work, except that now my 404 page is being loaded over and over, which still uses lots of bandwidth. Is there a way to create an .htaccess rule that requires no bandwidth usage on my part? I also tried creating a robots.txt file, but the attack seems to be ignoring that.

        # This is the relevant part of the .htaccess file:
        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
        RewriteRule .* - [F]
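
    A small extension of that rule, as a sketch (assuming Apache; the [F] flag already answers with a bare 403 and no redirect, so the remaining cost is mostly the error page itself):

        # Serve a one-word 403 body instead of a full themed error page.
        ErrorDocument 403 "Forbidden"
        RewriteEngine On
        # Block by referer, or by the ?v=... query pattern seen in the logs.
        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC,OR]
        RewriteCond %{QUERY_STRING} ^v=[A-Za-z0-9_-]+ [NC]
        RewriteRule .* - [F,L]

    If the 404s came from a rule that redirected to a nonexistent URL, the [F] form avoids that entirely; a WordPress 404 is typically a full themed page, which is why it was still eating bandwidth.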

  • What measures can be taken to increase Google indexing speed for a given newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed by Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. How do I get important-page.html indexed as soon as possible after 12:00 - ideally within seconds or minutes? Or, more generally: what options are available to try to get Google to index a specific newly created page as soon as possible?
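
    One commonly suggested measure, sketched in Python (assumptions: the site maintains a sitemap that already lists the new page, at the hypothetical URL below; the ping tells Google the sitemap changed, it does not guarantee immediate indexing):

        # Python 2 sketch: ping Google's sitemap endpoint right after publishing.
        import urllib
        import urllib2

        SITEMAP_URL = 'http://www.example.com/sitemap.xml'
        ping = ('http://www.google.com/webmasters/tools/ping?sitemap=' +
                urllib.quote(SITEMAP_URL, safe=''))
        urllib2.urlopen(ping)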

  • How to find first cached date and time of a website?

    - by John Sanjay
    I found that some of my webpage content has been copied to another website. I need to check the cache date and time of both pages, so that I can compare them and guess which page is the original and which one is the duplicate according to Google, because I know that if Google finds two webpages with the same content, it considers the first page cached by the Google bot to be the unique one. Is there any online tool for checking that, or some other way?
