Search Results

  • Affiliate programs offering redirects back to my site after successful conversion

    - by Andy
    I am building a website where members are rewarded for completing actions. Some of these actions are within my site (such as uploading a photo); I would also like to offer my members the ability to participate in affiliate sites as follows: once they complete the required action on the affiliate site, they are rewarded on my site. Ideally, I would pass in the redirect URL when sending the member to the affiliate's site. Are there affiliate programs that offer to redirect back to your site after a successful conversion?

    Read the article

  • My website is not getting any traffic. How do I get traffic? [closed]

    - by Divyanshu Negi
    Possible Duplicate: How can I increase the traffic to my site? I have done a fair amount of SEO on my website. The site is built in PHP; anyone can submit an article, and submitted articles are moderated by the moderators. The site has been online for more than a month, but the user count is still only 10-20 and total impressions are around 700 according to Google Webmaster Tools. How long does Webmaster Tools take to refresh its data? For the last 3-4 days the impressions shown in the dashboard have stayed at 700. I am posting two articles each day. I am very disappointed given all my effort and could really use some motivation to carry on. Please help; my website URL is http://www.viewloud.com

    Read the article

  • Google indexed my home page incorrectly: How can I fix it?

    - by louis_coetzee
    I finished my website and launched it. I think I had a problem with my robots.txt, so I changed it (03/08/2012) to look like this:

        # Allows all bots
        Sitemap: http://www.mysite.co.za/sitemap.xml
        User-agent: *
        Disallow: /dashboard/

    When I google my domain.co.za, I get this back: "Home. A description for this result is not available because of this site's robots.txt – learn more. You've visited this page 3 times. Last visit: 2012/08/15". Now that I have fixed this and added a 301 redirect from mysite.co.za to www.mysite.co.za, I would love it if Googlebot would come and pay a visit. Is there anything I can do to get this fixed?

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with _escaped_fragment_?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content, and we are following Google's documentation on this. Has anyone ever tried writing a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it isn't very clear. The quote below is what I found on Google's site; I'm not sure whether they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or that this is about redirecting the #! URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
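    For reference, a minimal .htaccess sketch of the redirect being described (the /snapshot/ path and the exact mapping are assumptions; this simply answers any request carrying _escaped_fragment_ with a 302 to a static snapshot URL):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (?:^|&)_escaped_fragment_=(.*)$
        RewriteRule ^(.*)$ /snapshot/%1? [R=302,L]

    With a rule like this, /page?_escaped_fragment_=yourfilename would receive a 302 to /snapshot/yourfilename, which matches the temporary-redirect behaviour the quoted documentation discusses.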

    Read the article

  • Cannot submit change of address to subdomain in Google Webmaster Tools?

    - by RCNeil
    I am pointing several domains to one URL, a URL which happens to include a subdomain. All of the domains use 301 redirects to point to this new address. One of the older domains (which used to be a site in its own right) is a 'property' in Webmaster Tools, as is the new site (the one with the subdomain). When registering a 'Change of Address' for the old site in Webmaster Tools, it suggests the following method: 1. Set up your content on your new domain (done). 2. Redirect content from your old site using 301 redirects (done). 3. Add and verify your new site in Webmaster Tools (done). Then, directly below that, to proceed, it says "Tell us the URL of your new domain:" followed by "Your account doesn't contain any sites we can use for a change of address. Add and verify the new site, then try again." I have already submitted and verified the new site. The only reason I can fathom for this error is that the new site includes a subdomain. Although I don't foresee getting punished for this, as I am correctly 301-redirecting traffic anyway, I'm curious why the Change of Address submission isn't working for me. Has anyone else had experience with this?

    Read the article

  • Upload image file: is compression on the client side already possible?

    - by Chris
    When offering photo uploads, users typically have badly compressed, huge (10+ megapixel) JPEG files straight from their cameras or phones. On the server side, these files get re-compressed to something like 800x600px at JPEG quality 7 or 8. Is it (already) possible to do that re-compression on the client side, so that I would only need to transmit about 100 kB (800x600px) instead of 3 MB or more? Something like: (1) With JavaScript's new FileSystem API ( http://slides.html5rocks.com/#filewriter ) it would be possible to read the photo file's data into client-side JS. (2) Then it would be necessary to re-encode the JPEG data, which should be possible, but I could not find any library for that (yet). Does anybody know of such a library? (3) The last step would be to POST the re-compressed JPEG data to the server for storage, and get back a URL to the stored photo for inclusion in the client's HTML. I am looking for a jQuery plugin, other JS library, or example web page that does this.
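    In current browsers the canvas element can cover steps (1)-(3) without a dedicated JPEG encoder; a minimal sketch, assuming a file input with id "photo", an 800x600 target size and an /upload endpoint (all three are assumptions, not part of the question):

        document.querySelector('#photo').addEventListener('change', function () {
          var file = this.files[0];
          if (!file) { return; }
          var img = new Image();
          img.onload = function () {
            // scale down to fit 800x600 while keeping the aspect ratio
            var scale = Math.min(1, 800 / img.width, 600 / img.height);
            var canvas = document.createElement('canvas');
            canvas.width = Math.round(img.width * scale);
            canvas.height = Math.round(img.height * scale);
            canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
            // re-encode as JPEG (0.8 is roughly "quality 8") and POST the result
            canvas.toBlob(function (blob) {
              var form = new FormData();
              form.append('photo', blob, 'photo.jpg');
              fetch('/upload', { method: 'POST', body: form })
                .then(function (res) { return res.text(); })
                .then(function (url) { console.log('stored at', url); });
            }, 'image/jpeg', 0.8);
          };
          img.src = URL.createObjectURL(file);
        });

    This keeps the only server-side work to storing the blob and returning its URL; no JPEG-encoding library is needed on the page.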

    Read the article

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some folders based on the year and month, while leaving other folders pointing at their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:
        http://mydomain.com/wordpress/myblog/2010/02
        http://mydomain.com/wordpress/myblog/2011/01
        http://mydomain.com/wordpress/myblog/2012/01
    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:
        http://mydomain.com/wordpress/myotherblog/2010/02
        http://mydomain.com/wordpress/myotherblog/2011/01
    and so on, while 2012 and beyond keep going to the actual site (http://mydomain.com/wordpress/myblog/2012/01). I tried mod_rewrite with the following rules, one rule at a time, to test redirection for just one year (planning to expand to other years later), and none of them worked. RewriteEngine is already on, since there are some default WordPress rewrites; RewriteBase is set to http://mydomain.com/wordpress/; and I put my rule before all the default WordPress rules.
        Attempt #1: RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1
        Attempt #2: RewriteRule /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1 [R=301]
        Attempt #3: RedirectPermanent /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1
    I've also tried the rules with and without a fully qualified URL for the new location. The rewrite log, with the log level set to 9, did not provide any useful information: it shows the pattern being checked against the URL, but ultimately every request either passes through to http://mydomain.com/myblog/ or ends in a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions? (See the sketch below.)
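    One thing worth checking, offered as a sketch rather than a definitive fix: in a per-directory .htaccess (here /wordpress/.htaccess), RewriteRule patterns are matched against the path with the leading slash and the directory prefix already stripped, and RewriteBase expects a path (e.g. /wordpress/), not a full URL. A rule along these lines, placed above the default WordPress block, is how the 2010/2011 redirect would typically be written:

        RewriteEngine On
        RewriteBase /wordpress/
        RewriteRule ^myblog/(2010|2011)/(.*)$ /wordpress/myotherblog/$1/$2 [R=301,L]

    For a request to /wordpress/myblog/2010/02 this issues an external 301 to /wordpress/myotherblog/2010/02, and 2012+ URLs fall through to the normal WordPress rules.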

    Read the article

  • Making a language switch main menu button in Drupal

    - by Let_Me_Be
    I have a bilingual site in Drupal. The problem is that I hate the language switch block taking up so much space (sometimes the only thing in the sidebar is the language switch block). What I would love to have instead is a language switch menu item that points to the other language (the one other than the current language). Something like this: | Home | Projects | BlaBla | | Cesky |, and after switching: | Domu | Projekty | Blabla | | English |. Is that possible without writing a whole new module?

    Read the article

  • What are the recommended minimum payment options on an e-commerce site?

    - by Mantorok
    I've recently released a site that presently integrates with PayPal for taking payments. This doesn't require customers to hold a PayPal account, as they can pay by credit card through the PayPal checkout without having to sign up. But what other options would you say are recommended, or perhaps even required, to ensure you capture as many potential customers as possible? EDIT: We accept payments worldwide, by the way.

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try to figure it out. We have a site, moodbond.com, that allows users to browse and create events anywhere, and we fill it with content from the main cities in the US. We would like it to show up for searches like "events in san francisco" or "what to do in new york"; however, since the site is not really location-specific, I'm not sure where to begin. I've been thinking about a few approaches; maybe you can help me decide whether these would be a good way to start or whether I should try something different. 1. Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco. 2. Change the headers/title of the page so they adapt automatically to the city being browsed (and change this dynamically as the user moves the map). 3. Add internal links to different locations, e.g. a link in the footer that says "Events in Seattle" and makes the site load events in that city (this would probably depend on implementing #1). What do you think? Will any of these really help, or should I look for a different approach? Any advice is welcome. Thanks.

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites and thought I had been penalized for it; I asked a question about this and some experienced members told me that the crosslinking may not necessarily be the reason. The sites are on the same host, on different C-class IPs, and every site is linked to every other. Each site targets long-tail keywords: Site 1 targets "BMW used cars" plus my area, Site 2 targets "WW used cars" plus my area, and so on. When I crosslinked them (in the sidebar), I did it for the users: instead of repeating the terms "used cars" and my location over and over (since my users are targeted), I just crosslinked them using the brand names, BMW and WW. Targeting locally, my niches are not overly competitive, so I did not need many external links to rank in various positions on the first page. I'm thinking that when I chose to link using only the brand, Google might have thought I wanted to actually rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now no-followed the links and am noticing a slight recovery, but if it is not an interlinking penalty, it would be a shame not to benefit from my links.

    Read the article

  • Linking to a competitor with the same keyword I am targeting: good or bad for SEO?

    - by Badal Surana
    I am linking to one of my competitors from my site using the same keyword that I am targeting for my own site (my competitor is paying me for this). For example: my competitor and I are both targeting the keyword "foo", and my competitor is paying me to link to his site from mine with the anchor text "foo". What I want to know is: if I do that, will my site's position go down in the Google search results, or will it make no difference?

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com, www.example.com/about_us, www.example.com/products/something, ... and example.com, example.com/about_us, example.com/products/something, ... For obvious reasons this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to their www-subdomain equivalents. The site has many hundreds of pages, but the only way I can see to request removal is via the "Remove outdated content" screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? And is removal even the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?

    Read the article

  • How to tackle archived WHOIS personal data after opting out?

    - by defaye
    As far as I understand it, it is possible (in the UK at least) for non-trading individuals to opt out of having their address details displayed in the WHOIS information for a domain. What I want to know is: after opting out, how do individuals combat archived data? Is there any enforcement of this? How many WHOIS websites are there that archive data, and what rights do we have to force them to remove that data without paying absurd fees? And if one does capitulate to these scoundrels, what is the point of paying for the removal of archived data if that data can presumably resurface on another WHOIS repository? In other words, what strategy is one supposed to take, besides being wiser after the fact?

    Read the article

  • Massive 404 attack with non-existent URLs. How can I prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never existed on the site. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp-admin) and probes for the cPanel login. I already block TRACE, and the server is equipped with some defence against scanning and hacking; however, this doesn't seem to stop it. The referrer, according to Google Webmaster Tools, is totally.me. I have been looking for a solution, because it certainly isn't good for the poor real users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think it is, i.e. some sort of attack? Is there a way to fix it, or better, to prevent this useless waste of resources? EDIT: I've never used a question to thank people for the answers, but I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and sends me an email with the user agent and IP, while returning a standard 404 header (a sketch of this is below); and a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I was able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
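    A minimal sketch of the kind of 404 handler described in the edit (the log path, the suspect patterns and the notification address are assumptions; it would be wired in with something like ErrorDocument 404 /404.php):

        <?php
        // Custom 404 page: always answer with a real 404, and quietly log/notify
        // when the missing URL matches known probe patterns.
        header('HTTP/1.1 404 Not Found');
        $uri   = $_SERVER['REQUEST_URI'];
        $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
        if (preg_match('/(wp-admin|wp-login|viewtopic\.php|cpanel)/i', $uri)) {
            $line = date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' . $agent . ' ' . $uri . PHP_EOL;
            file_put_contents('/var/log/site/suspect-404.log', $line, FILE_APPEND);
            mail('you@example.com', 'Suspect 404 hit', $line);
        }
        ?>
        <h1>Page not found</h1>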

    Read the article

  • SEO and JavaScript since Google admits JS parsing

    - by schlingel
    We're planning to build an HTML-snapshot creation service to provide the Google crawlers with static HTML versions of our JS-driven single-page application. Is this still necessary and/or encouraged now that Google openly admits it parses JS? How should I approach this evaluation? Are there tools that provide data on when snapshots are needed and when Google's parsing is sufficient? Is serving snapshots still preferable simply because it would be much faster than the incremental JS rendering?

    Read the article

  • Problems using PHP on a Plesk Windows dedicated server

    - by wpsmdouble
    I'm having a few issues with my dedicated Windows server that has a Plesk panel. 1.) I can't send e-mail via PHP's mail() function; it always returns the following error: "SMTP server response: 550 Requested action not taken: mailbox unavailable or not local". Which setting should I alter in order to enable sending mail? (A configuration sketch for this is below.) 2.) The exec() and passthru() PHP functions don't work; their output is blank even when I try to run simple commands such as dir. Is there an option I can toggle in the control panel to enable this functionality?
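    For the first issue, a sketch of pointing mail() at an SMTP relay it can actually reach. On Windows, PHP's mail() uses the SMTP, smtp_port and sendmail_from ini settings; the relay host, port and addresses below are assumptions, though Plesk's local mail server often accepts mail on localhost:25:

        <?php
        ini_set('SMTP', 'localhost');
        ini_set('smtp_port', '25');
        ini_set('sendmail_from', 'noreply@example.com');
        $ok = mail('you@example.com', 'Test from PHP', 'Hello from mail()');
        var_dump($ok);   // true means the relay accepted the message; check the mail log otherwise

    The same values can be set site-wide in php.ini or through Plesk's PHP settings rather than per-script. A 550 "mailbox unavailable or not local" response usually means the relay rejected the recipient or sender domain, so the sender address may also need to be a mailbox that exists on the server.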

    Read the article

  • Embedding a generic Google search with autocomplete - not a custom site search

    - by picxelplay
    Most people's home page is google.com. My home page is just a custom HTML page hosted on my computer. I do this because I am a web developer and have several projects that I work on at any one time, so I like to have quick links to all of them. On that page I usually just have a link to google.com for when I want to search. But below all of my quick links, I want to add a Google search box (with autocompletion). I first used a simple iframe to embed google.com into the page, but then my search results were confined to that iframe; I want to search for something and have the results open in a new tab. I then came across this code snippet, but it doesn't have autocompletion: http://www.refactory.org/s/google_search/view/2 How can I add autocompletion to this? Or is there a better way of doing it? Thanks in advance for any advice.
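    As a starting point, a minimal sketch of a plain search box that submits straight to Google and opens the results in a new tab; this covers the "not confined to an iframe" part, while autocompletion would still have to come from Google's own suggest service or their Custom Search element as a separate step:

        <form action="https://www.google.com/search" method="get" target="_blank">
          <input type="text" name="q" size="40" autofocus placeholder="Search Google">
          <button type="submit">Search</button>
        </form>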

    Read the article

  • schema.org specification for generic pages or posts on a CMS

    - by NateWr
    I'm trying to determine the best schema.org type to declare for the content section in the template of a content management system that will handle regular news posts for small, local hospitality businesses. The type should represent the content of that page, which is likely to cover a wide range of things. The description for Article pretty strongly encourages limiting its use to the articles of a publication. For purely semantic reasons, I'm not sure Blog is appropriate in this case either: businesses won't be creating typical "blog" content, but are more likely to be writing about upcoming events, special deals, awards, etc. Would WebPage be appropriate in this instance? Although I'm a fan of the schema.org concept, I frequently find myself unsure how broadly or narrowly I'm meant to interpret a type. In such cases, is it safe to use a high-level type such as CreativeWork, or does this blunt the usefulness of the markup?
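    Whichever type wins out, declaring it on the post wrapper is the same one-attribute change; a sketch in microdata (property names are from schema.org, and WebPage here is just the candidate under discussion, not a recommendation):

        <article itemscope itemtype="https://schema.org/WebPage">
          <h1 itemprop="name">Post title</h1>
          <div itemprop="text">Post body goes here.</div>
        </article>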

    Read the article

  • How to determine if someone is accessing our database remotely?

    - by Vednor
    I own a content-publishing website built with CakePHP 2.1.2 and MySQL 5.1.63. It was developed by a freelance developer who kept remote access to the database, which I wasn't aware of. One day he accessed the site and overwrote all the data. After the attack, my hosting provider disabled remote access to our database and changed the password. But somehow he accessed the site database again and overwrote some information. We managed to stop the second attack by taking the site down immediately, but now we suspect he will attack again. What we could identify is that he is running a query and changing every piece of information in the database in a matter of seconds. Is there any possible way to detect how he is accessing our database without remote access or knowledge of our cPanel password? Or to identify whether he has left something inside the site that grants him access to our database?
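    Two quick things that can be checked directly in MySQL, as a sketch (run as a privileged user; the table and variable names are standard MySQL, everything else about the setup is an assumption):

        -- Which accounts exist, and from which hosts they may connect;
        -- anything other than localhost/127.0.0.1 still permits remote use.
        SELECT user, host FROM mysql.user;

        -- Temporarily turn on the general query log to see every statement and who ran it
        -- (available in MySQL 5.1; it grows quickly, so switch it off again afterwards).
        SET GLOBAL log_output  = 'TABLE';
        SET GLOBAL general_log = 'ON';
        SELECT event_time, user_host, argument
          FROM mysql.general_log
         ORDER BY event_time DESC
         LIMIT 50;

    If the destructive queries show up as coming from the web application's own MySQL user on localhost, the access is most likely through something left in the application code (a backdoor script or leaked credentials) rather than a remote connection.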

    Read the article

  • Domain mapping issues

    - by Nadya
    I have two domain names, .com and .co.uk, bought through 123-reg, and just one student Windows hosting pack, associated with the .co.uk domain. The .com domain is the main one that people will be trying to access, so I mapped that domain to the hosting this morning. The problem is that I would really like it to be functional by tomorrow morning, and the usual waiting time is 24-48 hours. Is there any point in stopping the process and trying to forward it with a CNAME record instead; does that take less time? (I can go back and do proper domain mapping over the weekend.) Also, is there a way to check whether the domain mapping has been done correctly before those 24-48 hours are up? From some computers I get a 404 error on the home page.
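    On the last question, a quick way to check is to query the DNS directly from a command line; a sketch (the nameserver below is a placeholder for whatever the NS lookup actually returns for the domain):

        dig +short NS example.com
        dig +short A example.com @ns-from-previous-answer.example.net
        dig +short A example.com

    The second query asks the domain's own authoritative nameserver, so it shows the new record as soon as it has been published, regardless of propagation; the third shows what the local resolver (and hence a given computer) currently sees. On Windows, nslookup example.com <nameserver> does the same job.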

    Read the article

  • htaccess and htpasswd trouble

    - by hjpotter92
    This is the first time I have ever tried working with .htpasswd and .htaccess files, so please point out any beginner mistakes. I have my Apache document root set to /www/ on my Debian server. Inside it there's a folder named Logs/ to which I want to restrict access using htpasswd. I created my password file using the shell's htpasswd command, with this result:
        user:<encoded password here>
        hjp:<encoded password here>
        hjpotter92:<encoded password here>
    I put this file, named .htaccess, inside /www/. The Logs/ folder has the following .htaccess file in it:
        AuthName "Restricted Area"
        AuthType Basic
        AuthUserFile /www/.htpasswd
        AuthGroupFile /dev/null
        require valid-user
    This was again created using an online tool (I forgot its name/link and can't search my browser history now). The problem, as may already have struck you, is that I see no change in access to the Logs folder: it is still accessible to everyone. I am running Apache as root (if that matters/helps). Please help or point me in the right direction; I've tried reading some .htaccess guides and have followed some older SO questions, but still haven't figured out a way to restrict access to the Logs folder with a password. (See the sketch below.)
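    Two things stand out. First, the password file described above was saved as /www/.htaccess, while AuthUserFile points at /www/.htpasswd, so Apache may simply not find it. Second, per-directory auth directives are ignored unless the main configuration allows them; a sketch of the usual fix for the vhost config (on Debian, under /etc/apache2/sites-available/), assuming it currently has AllowOverride None:

        <Directory /www/Logs>
            AllowOverride AuthConfig
        </Directory>

    followed by apache2ctl configtest and an Apache reload. Alternatively, the same Auth* directives can be placed directly inside such a <Directory> block in the server config, which removes the dependency on .htaccess altogether.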

    Read the article

  • When tracking which elements were clicked, e.target.id is sometimes empty [migrated]

    - by Ivan
    I am trying to test the following JavaScript code, which is meant to keep track of the timing of user responses on a multiple-choice survey:

        document.onclick = function(e) {
            var event = e || window.event;
            var target = event.target || event.srcElement;   // use the normalised event object
            // time tracking
            var ClickTrackDate = new Date();
            var ClickData = target.id + "=" + ClickTrackDate.getUTCHours() + ":" +
                            ClickTrackDate.getUTCMinutes() + ":" +
                            ClickTrackDate.getUTCSeconds() + ";";
            document.getElementById("txtTest").value += ClickData;
            alert(target.id); // for testing
        }

    Usually target.id equals the id of the clicked element, as you would expect, but sometimes target.id is empty, seemingly at random. Any ideas?
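    A likely explanation rather than a certainty: e.target is the innermost element that was actually clicked, so if the click lands on a child (a span, label or text wrapper inside the answer element) that has no id of its own, target.id is the empty string. A sketch that climbs to the nearest ancestor that does have an id:

        document.onclick = function (e) {
            var event = e || window.event;
            var el = event.target || event.srcElement;
            while (el && !el.id) {        // walk up until an element with an id (or the root) is found
                el = el.parentNode;
            }
            console.log(el && el.id ? el.id : "(no id found)");
        };

    Alternatively, attaching the handler to the specific answer elements (so this.id is always the element that was registered) avoids the problem entirely.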

    Read the article

  • Links in my site have been hacked

    - by Funky
    On my site I prefix image and link URLs with the domain of the site, for better SEO, using the code below:

        public static string GetHTTPHost()
        {
            string host = "";
            if (HttpContext.Current.Request["HTTP_HOST"] != null)
                host = HttpContext.Current.Request["HTTP_HOST"];
            if (host == "site.co.uk" || host == "site.com")
            {
                return "http://www." + host;
            }
            return "http://" + host;
        }

    This works great, but for some reason lots of links have now changed to http://www.baidu.com/... There is no sign of this anywhere in the code or the project, and the files on the server have a change date from when I last published at 11 o'clock yesterday, so all the files on there look fine. I am using ASP.NET and Umbraco 4.7.2. Does anyone have any ideas? Thanks.
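    One property of that helper worth noting: HttpContext.Current.Request["HTTP_HOST"] uses the combined indexer, which checks the query string, form fields and cookies before the server variables, and even Request.Url.Host comes from the client's Host header, so the value is attacker-influenced. A sketch of the same helper with the host validated against the known domains (names taken from the snippet above; whether this is related to the baidu links is only a guess, since with unchanged files on disk the injected links may just as well live in the Umbraco content database):

        using System.Web;

        public static class UrlHelper
        {
            private static readonly string[] Allowed =
                { "site.co.uk", "www.site.co.uk", "site.com", "www.site.com" };

            public static string GetHTTPHost()
            {
                string host = HttpContext.Current.Request.Url.Host.ToLowerInvariant();
                if (System.Array.IndexOf(Allowed, host) < 0)
                {
                    host = "site.com";                      // fall back to the canonical domain
                }
                return host.StartsWith("www.")
                    ? "http://" + host
                    : "http://www." + host;
            }
        }

    Searching the published content in the Umbraco back office (or the database) for "baidu" would confirm where the rogue links are actually stored.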

    Read the article
