Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 152 of 216

  • Domain held at one registry, need to redirect to subdomain on my own hosting provider

    - by Glinkot
    Hi. There are lots of questions around this general area, but I haven't seen one that exactly mirrors what I'd like to know; it's per the title, really. My understanding (and what I'm told by my host) is that the easiest thing is to get the transfer key and bring the DNS across to my own hosting provider. I'm also told by my host that this doesn't affect the client's ownership of the domain itself. Basically, I have a subdomain set up with the site (it has the same IP address as the top-level domain), so presumably just giving the other registrar that IP address will only point it at the TLD rather than the subdomain. What's the easiest way to achieve this? It's an ASP.NET site, and I don't have a hosted directory on the client's account where I can code a redirect. Thanks all, Mark
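
    One way to handle it without any code on the client's account is a host-level alias. A minimal sketch, assuming the new host runs Apache and the subdomain's files live in /var/www/subdomain (both names are hypothetical):

        <VirtualHost *:80>
            # Hypothetical names: serve the bare domain and www from the
            # same document root the subdomain already uses.
            ServerName clientdomain.example
            ServerAlias www.clientdomain.example
            DocumentRoot /var/www/subdomain
        </VirtualHost>

    At the registrar you still need an A record pointing the domain at the host's IP; it is the vhost (or the equivalent "add-on domain" option in the hosting control panel) that maps the request to the subdomain's content rather than the top-level site.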

    Read the article

  • Tracking users behaviour - with or without Google Analytics

    - by Ilian Iliev
    If I understand correctly, the following point from the GA TOS applies: "PRIVACY. You will not (and will not allow any third party to) use the Service to track or collect personally identifiable information of Internet users, nor will You (or will You allow any third party to) associate any data gathered from Your website(s) (or such third parties' website(s)) with any personally identifying information from any source as part of Your use (or such third parties' use) of the Service. You will have and abide by an appropriate privacy policy and will comply with all applicable laws relating to the collection of information from visitors to Your websites. You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data." In other words, you are not allowed to use custom variables that identify the visitor (for example a website username, e-mail, ID, etc.). So the question is: how can I track a specific user's behaviour (for example, the actions that each logged-in user performs)?
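
    A commonly used workaround - offered as a sketch, not as legal advice on the TOS - is to tag sessions with an internal identifier that cannot be traced back to a person, rather than a username or e-mail. Assuming classic ga.js and a hypothetical $user_id from your own session handling:

        <?php
        // Hypothetical: an internal numeric id from your own database, hashed
        // with a private salt so the value stored in GA is meaningless on its own.
        $anon_id = substr(hash('sha256', 'my-private-salt' . $user_id), 0, 12);
        ?>
        <script type="text/javascript">
          // Assumes the standard ga.js loader snippet is already on the page.
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
          // Visitor-scoped custom variable (slot 1, scope 1) holding the anonymous id.
          _gaq.push(['_setCustomVar', 1, 'uid', '<?php echo $anon_id; ?>', 1]);
          _gaq.push(['_trackPageview']);
        </script>

    Whether a salted hash still counts as "personally identifiable" is exactly the judgement the TOS leaves to you, so treat this only as a starting point.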

    Read the article

  • Private domain purchase with paypal: how to prevent fraud?

    - by whamsicore
    I am finally going to buy a domain I have been looking at. The domain owner wants me to give him my Godaddy account information and send him the payment via Paypal gift, so that there will be no extra charges. Should this cause suspicion? Does Paypal offer any kind of fraud protection? What is the best way to protect myself from fraud in this situation, without the need for escrow services, such as escrow.com? Any advice welcomed. Thanks.

    Read the article

  • Add rules (filters) to ftp programs to avoid uploading certain files/folders

    - by guisasso
    I use FileZilla as my FTP client, but this question applies to any other client that could be useful. Can I (in any client) add rules (filters) to an FTP program to avoid uploading certain files or folders? For example: Expression Web creates those annoying _vti_cnf folders, or there are folders in which I keep the original version of a picture, without a watermark, that I don't want to upload. Example: I have a folder A that has subfolders "original" and "current"; I would like to add a filter so that every time I select A to be uploaded, "original" wouldn't go but "current" would.
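
    FileZilla's filename filters live in the GUI, so if a scriptable client is an option, exclude patterns are easier to keep in version control. A hedged sketch using lftp (folder and host names are hypothetical):

        # Reverse mirror (upload) folder A, skipping the "original" and _vti_cnf
        # directories; --exclude-glob takes shell-style wildcards.
        lftp -u user,pass ftp.example.com \
             -e "mirror -R --exclude-glob original/ --exclude-glob _vti_cnf/ A /public_html/A; bye"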

    Read the article

  • How does Lastpass recognize actual login?

    - by Pan.student
    We are currently working on a simple school project using CodeIgniter where we need a login page. It would be very useful if LastPass could recognise and save logins. We have several accounts with different roles, and manual entry of the login is pretty slow. So I was wondering what needs to be done, and where in the files (view, controller?), for LastPass to work as it does on every other website. For example, this is our login form:

        <?php echo form_open('login'); ?>
        <input type="text" id="username" name="username"/>
        <input type="text" id="password" name="password"/>
        <input type="submit" value="Login"/>
        </form>

    Thanks for the help. (Could not create a new tag "Lastpass" due to low reputation.)

    [SOLVED] Changed <input type="text" id="password" name="password"/> to <input type="password" id="password" name="password"/>

    Read the article

  • Rewrite img and link paths with htaccess and serve the file from rewritten path?

    - by frequent
    I have a static mockup page which I want to "customize" by switching a variable used in image src and link href attributes. The paths look like this:

        <img src="/some/where/VARIABLE/img/1.jpg" alt="" />
        <link rel="some" href="/some/where/VARIABLE/stuff/foo.bar" />

    I'm setting a cookie with the VARIABLE value on the preceding page and now want to modify the paths accordingly by replacing VARIABLE with the cookie value. I'm an .htaccess newbie. This is what I have (it doesn't work):

        <IfModule mod_rewrite.c>
        # get cookie value
        RewriteCond %{HTTP_COOKIE} client=([^;]*)
        # rewrite/redirect to correct file
        RewriteRule ^/VARIABLE/(.+)$ /%1/$1 [L]
        </IfModule>

    So I thought my first line gets the cookie value and stores it in %1, and on the second line I'm matching VARIABLE and replacing it with the cookie value plus whatever comes after VARIABLE in $1. Thanks for shedding some light on what I'm doing, what I'm doing wrong, and whether I can do this at all using .htaccess.

    EDIT: I'm sort of halfway through, but it's still not working... Maybe someone can apply the finishing touches:

        <IfModule mod_rewrite.c>
        # check for client cookie
        RewriteCond %{HTTP_COOKIE} (?:^|;\s*)client=([^;]*)
        # check if an image was requested
        RewriteCond %{REQUEST_FILENAME} \.(jpe?g|gif|bmp|png)$
        # exclude these folders
        RewriteCond %{REQUEST_URI} !some/members/logos
        # grab everything before the variable folder and everything afterwards;
        # replace this with first bracket/cookie_value/second bracket
        RewriteRule (^.+)/VARIABLE/(.+)$ $1/%1/$2 [L]
        </IfModule>

    Still can't get it to work, but I think this is the correct way of doing it. Thanks for the help!
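
    For what it's worth, one frequent stumbling block is that in per-directory (.htaccess) context the pattern is matched against a path without a leading slash, so ^/VARIABLE/... in the first attempt can never match. A hedged variant of the same idea, assuming the cookie is named client and the literal folder in the markup is VARIABLE:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # Capture the cookie value into %1.
        RewriteCond %{HTTP_COOKIE} (?:^|;\s*)client=([^;]+)
        # Swap the literal VARIABLE path segment for the cookie value.
        # No leading slash: .htaccess patterns see the path relative to this directory.
        RewriteRule ^(.+)/VARIABLE/(.+)$ $1/%1/$2 [L]
        </IfModule>

    If the substituted paths point at real files on disk, the internal [L] rewrite is enough; add R=302 temporarily if you want to watch the redirects happen while testing.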

    Read the article

  • Browser window size statistics?

    - by Litso
    I was wondering: are there any statistics available on what size users have their browser window set to nowadays? I know the screen resolutions (we have analytics, which shows those as well), but I doubt a lot of people with 1280*xxx and higher resolutions still browse full-screen. My boss is determined to keep our website 900px wide, though, because that way people with 1800*xxx resolutions can have two browser windows next to each other without having to scroll horizontally. I have never seen anyone browse with two adjacent browser windows like that except here at my current job, so I'm doubting whether this is the best decision or just his personal preference. Can anyone help out here?
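
    If the analytics package only reports screen resolution, the viewport can be measured directly and logged as an event, which at least answers the question for your own audience. A small sketch using the classic ga.js API (assumes the standard tracking snippet is already on the page; the event names are arbitrary):

        <script type="text/javascript">
          // Record the actual browser viewport once per pageview.
          var w = window.innerWidth || document.documentElement.clientWidth;
          var h = window.innerHeight || document.documentElement.clientHeight;
          _gaq.push(['_trackEvent', 'Viewport', 'size', w + 'x' + h, 0, true]);
        </script>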

    Read the article

  • What is the best way to launch a website to make it go viral?

    - by Talvi Watia
    Say, for example, I have a website that has finished beta and is ready for full launch. I'm at the stage now where I want it to be very visible and to start getting traffic. Right now I am just getting some random search engine hits, word of mouth and a few referrals. How do I break out of this and really get some serious traffic to my site? Do I need to launch a huge ad campaign? If so, what is the best/most cost-effective way to do this? I have tried Google AdWords, but at $0.45+ a click this just isn't practical for what I have.

    Read the article

  • Rendering citations and references in HTML using PHP/Perl/Python/

    - by Nick
    Is there a PHP/Perl/Python/... library for picking citations out of an HTML file and rendering a nice list of references at the bottom, like in Wikipedia? I'm developing a website with heavily-sourced content, and I'd really like to have automatically-generated lists of formatted references, like in Wikipedia. (Check out their philosophy page, and see how the superscript numbered citations interact with the references at the bottom. This is all dynamically generated, automatically ordered & linked.) They do it really well: the citations are linked to the references (which are backlinked to the citations), when you click on one of the links, the target is highlighted, etc. I'm tempted to build the site on MediaWiki just for this one feature, but it seems like overkill. Do I have any options?
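
    You don't need MediaWiki for the core of that behaviour; the numbering and backlinking fit in a small helper. A rough PHP sketch, assuming the articles mark citations with <ref>…</ref> (the marker syntax is an assumption, not an existing library's API):

        <?php
        // Turn <ref>...</ref> markers into numbered, backlinked citations
        // and append a reference list. Requires PHP 5.3+ for the closure.
        function render_citations($html) {
            $refs = array();
            $html = preg_replace_callback('#<ref>(.*?)</ref>#s', function ($m) use (&$refs) {
                $refs[] = $m[1];
                $n = count($refs);
                return '<sup id="cite-' . $n . '"><a href="#ref-' . $n . '">[' . $n . ']</a></sup>';
            }, $html);

            if ($refs) {
                $html .= "<ol class=\"references\">\n";
                foreach ($refs as $i => $text) {
                    $n = $i + 1;
                    $html .= '<li id="ref-' . $n . '"><a href="#cite-' . $n . '">^</a> ' . $text . "</li>\n";
                }
                $html .= "</ol>\n";
            }
            return $html;
        }

        echo render_citations('<p>A heavily sourced claim.<ref>Author, Title, 2011.</ref></p>');

    Repeated/named references would need a little more bookkeeping, and the highlight-on-click effect is just the CSS :target selector on the #ref-n elements.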

    Read the article

  • How can I compile an IP address to country lookup database to make available for free?

    - by Nick
    How would I go about compiling an accurate database of IP addresses and their related countries to make available as an open source download for any web developer who wants to perform a geographic IP lookup? It seems that a company called MaxMind has a monopoly on geographic IP data, because most online tutorials I've seen for country lookups based on IP addresses start by suggesting a subscription to MaxMind's paid service (or their less accurate free 'Lite' version). I'm not completely averse to paying for their solution or using the free one, but the concept of an accurate open source equivalent that anyone can use without restriction appeals to me, and I think it would be useful for the web development community. How is geographic IP data collected, and how realistic is it to hope to maintain an up-to-date open version?
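
    For what it's worth, the raw allocation data behind most of these products is published by the five regional registries as "delegated" statistics files, so a first cut can be built without MaxMind. A hedged PHP sketch of parsing one such file (the pipe-separated layout shown is the documented RIR format; the filename is an example, and a 64-bit PHP build is assumed for ip2long):

        <?php
        // Lines look like: ripencc|FR|ipv4|2.0.0.0|1048576|20100712|allocated
        // i.e. registry|country|type|start_ip|count|date|status
        $ranges = array();
        foreach (file('delegated-ripencc-latest') as $line) {
            $line = trim($line);
            if ($line === '' || $line[0] === '#') continue;
            $f = explode('|', $line);
            if (count($f) < 7 || $f[2] !== 'ipv4') continue;   // skips header and summary rows
            $start = ip2long($f[3]);
            $ranges[] = array($start, $start + (int) $f[4] - 1, $f[1]);   // [from, to, country]
        }

        // Naive lookup; a real build would sort the ranges and binary-search them.
        function country_for($ip, $ranges) {
            $n = ip2long($ip);
            foreach ($ranges as $r) {
                if ($n >= $r[0] && $n <= $r[1]) return $r[2];
            }
            return null;
        }

    Keeping it current then becomes a daily job that re-downloads the five files, which is also the honest answer to "how realistic is it": the data changes constantly, so the update pipeline matters more than any one-off dump.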

    Read the article

  • Random links, SEO and spam

    - by DoesNotCompute
    I built a mini-forum with social features for a client. To promote user registration, I planned to add a box on the forum pages displaying pictures of random registered users, linked to their profiles. I managed to make this random selection static for a day; I mean the list is renewed each day rather than changing on every page refresh. Could this random list of links be harmful to SEO by being considered some kind of spam?
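
    On the "static for a day" part only: seeding the random number generator with the date gives a selection that is random but identical for every page view on the same day, so no caching table is needed. A small sketch, assuming $users is a non-empty array of profile records already loaded from the database:

        <?php
        // Seed with today's date so the "random" picks only change once a day.
        mt_srand(crc32(date('Y-m-d')));
        $keys = array_rand($users, min(6, count($users)));
        $featured = array_intersect_key($users, array_flip((array) $keys));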

    Read the article

  • How to identify the client is a search robot?

    - by Yau Leung
    I have built my entire site using AJAX (it's GWT, in fact). I have also implemented the AJAX crawling scheme proposed by Google. However, after the implementation, I found that neither Yahoo, Bing, nor Baidu implements that scheme! I'm wondering if there is a way to identify whether the web client is a search robot. If it is, it will be shown the HTML snapshot I created. It would be best if I could identify them at the Apache level, where I can just do a mod_rewrite, but it's still OK if I can do it in PHP or GWT.
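
    The major crawlers announce themselves in the User-Agent header, so a first pass can be a pattern match at the Apache level. A hedged sketch that serves pre-rendered pages from a hypothetical /snapshots/ folder:

        RewriteEngine On
        # Known crawler user-agents; extend the list as needed.
        RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|baiduspider) [NC]
        # Avoid rewriting requests that are already inside the snapshot folder.
        RewriteCond %{REQUEST_URI} !^/snapshots/
        RewriteRule ^(.*)$ /snapshots/$1 [L]

    The usual caveat applies: serving bots something different from what users see can be treated as cloaking, so the snapshots should mirror what the AJAX version renders.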

    Read the article

  • Creating an online community - use templates or self-develop?

    - by ican ican
    PHPMotion, Joomla or develop my own? I'm thinking of developing a common-interest online community. It will have UGC, stats, etc., and perhaps an online store. Though cost is an issue at this time, I want to be professional and effective. Should I use an existing free platform, like PHPMotion or Joomla, or should I develop my own? What are the advantages/disadvantages of either option? As a rough estimate, how much will it cost me to develop and manage my own, and how long will it take? In general, what should I be careful about on this journey?

    Read the article

  • Awesome WordPress modular theme builder?

    - by Matt M.
    I came across a really neat modular theme for WordPress a while ago, but I can't remember the name so I'm wondering if somebody can help out. It was a paid theme with a dedicated company behind it if I'm not mistaken. The theme allowed you to change the colors of all the content areas, define widgets, and resize content areas. I'm really interested in being able to resize columns and such at the touch of a button, getting too old for the usual CSS micromanagement that goes on with brittle themes.

    Read the article

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags:

        <div class="row">
          <div class="column">column1</div>
          <div class="column">column2</div>
          <div class="column">column3</div>
          <div style="clear:both"></div>
        </div>

    What I tried was making the columns float left and setting their widths to 33%, which works fine. I use the clear:both div so that the row is the size of the biggest column, but the columns end up with different heights based on how much content they have. I have found many fixes, which mostly involve CSS hacks and just making it look like it's right, but that's not what I want. I thought of doing it in JavaScript, but then it would look different for users who disable JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row. But I know it's bad to use tables. After searching forever I came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ - what it does seems exactly the same as tables, since it's just setting the display exactly like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do.

    Edit: @Su' I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I just wasn't sure if I should use it, since I have always heard it's bad to use tables in website layouts.
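
    To be clear about what the linked technique does: display:table is plain CSS applied to divs, so it carries none of the semantic problems of an actual <table> used for layout. A minimal sketch applied to the markup above:

        .row {
            display: table;
            width: 100%;
            table-layout: fixed;   /* columns share the width evenly without explicit widths */
        }
        .row .column {
            display: table-cell;   /* cells in one row stretch to the tallest cell's height */
            vertical-align: top;
        }

    With this, the floats and the clearing div can both go; the main caveat for older browsers is IE7, which ignores display:table (the usual fallback is the float layout you already have).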

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find an appropriate answer to my specific question, I wanted to ask you. I've read a lot about the 301 redirect for moving pages and so on. A customer of mine registered a new domain last year for better search results (he included his main keyword in the domain; before, he had only a domain with his business name, which said nothing about what he does). I told him he should set up a 301 redirect so he doesn't lose his position in Google, and so that all customers coming from the old domain are redirected to the new one. After about a year in which his site had a good amount of traffic, Google's results for his keywords got worse. Since he didn't maintain his website (no new content, bad content on all pages and so on), I assumed this was the problem. He gave his website to another company, which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I had put there before). He also removed the canonical tag which I had placed there to ensure no duplicate content. What I am now afraid of is that without this redirect, Google will find duplicate content and kick him out of the index, which would be a nightmare, since most of his customers come via his website. I need verification that the 301 isn't bad but is in fact the correct way of working with two domains, if possible with good sources I can point him to, since he doesn't want to hear anything about it. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
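
    For reference, the redirect itself is only a few lines of configuration on the old domain; a hedged .htaccess sketch with placeholder names:

        RewriteEngine On
        # Send every request on the old domain to the same path on the new one.
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
        RewriteRule ^(.*)$ http://www.new-domain.example/$1 [R=301,L]

    Google's own documentation on moving a site to a new domain describes exactly this setup, which may carry more weight with the client than either agency's opinion.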

    Read the article

  • Should I have link rel=next & prev on URLs which have query variables?

    - by user21100
    For example, I have link rel prev & next set up on these product pages:

        site.com?page=2
        site.com?page=3

    (This is my preferred structure, by the way, and I'm trying to get all the ugly URLs that are littered with query variables deindexed, as they are causing duplicate content.) So the above URLs are fine, but once a filter to narrow the product results is selected, like "price", the URL looks like this:

        site.com?price[1000-1499]=on
        site.com?page=2&price[1000-1499]=on

    Right now the link rel prev & next tags are added dynamically to the header of these pages, but since I am working on getting these query-variable URLs deindexed, I am wondering if I should get rid of the tags on those pages? Any thoughts?
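
    One way to keep the two concerns apart is to always point rel="prev"/"next" at the clean paginated URLs and mark the filtered variants noindex. A rough PHP sketch, assuming $page and $total_pages come from your pagination code (the variable names are illustrative):

        <?php
        $base = 'http://site.com/';
        // Any query variable other than "page" means a filtered listing.
        $filtered = count(array_diff_key($_GET, array('page' => 1))) > 0;

        if ($filtered) {
            echo '<meta name="robots" content="noindex,follow">' . "\n";
        }
        // prev/next always reference the unfiltered series.
        if ($page > 1) {
            echo '<link rel="prev" href="' . $base . '?page=' . ($page - 1) . '">' . "\n";
        }
        if ($page < $total_pages) {
            echo '<link rel="next" href="' . $base . '?page=' . ($page + 1) . '">' . "\n";
        }

    (In a real build, page 2's prev link would point at the bare URL rather than ?page=1; that detail is left out of the sketch.)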

    Read the article

  • Adding a list of "recent articles" affects SEO

    - by Groo
    We have a site which has a sidebar with sections (or "widgets", if you like) showing things like "Recent Articles", "Other Articles by this User", "Similar Articles", etc. The issue is that Google seems to take these links very seriously. In fact, if I have only a single article which is closely related to a certain phrase (and several other pages link to it in their sidebars), a Google search lists all those other pages, highlighting that one link to the page that should actually be the most relevant result. And those pages don't even mention the phrase anywhere else. Is there a common approach to adding these sidebar links? For example, I might add them through AJAX after the page has loaded, but then crawlers will have a harder time finding them.
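
    If you do try the load-after-render route, it is only a few lines - a sketch assuming jQuery is already on the page and a hypothetical /widgets/recent endpoint that returns the list markup:

        <div id="recent-articles"></div>
        <script type="text/javascript">
          // Fetch the sidebar list after the main document has loaded, so crawlers
          // reading the raw HTML don't see these links inline.
          $(function () {
            $('#recent-articles').load('/widgets/recent');
          });
        </script>

    Googlebot already executes some JavaScript, so treat this as reducing, not eliminating, the chance the links are counted.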

    Read the article

  • Homepage issue on Google [closed]

    - by nico
    We recently updated our website www.blinds4uk.co.uk with a new homepage containing additional features and more on-page content, but since then we have lost primary keyword positions and the homepage has disappeared completely. The only time it appears is for an exact search for 'blinds4uk'. Today I took snippets of 'unique content' from the homepage and put them into a Google search, but our homepage was nowhere to be found. When I did the same in Yahoo, the homepage came up. Are we missing something?

    We ranked in the top 4 for the primary keyword terms 'blinds uk' & 'uk blinds', but now we don't show anywhere for these terms. Our homepage has never ranked well for the primary keyword 'blinds', yet our internal pages rank very well, with many pages on page 1 of google.co.uk. We employed an SEO firm for 9 months to help us establish the issues with the homepage, but they never could, so we got rid of them. We have been trying to get to the root cause of why the homepage ranks so poorly for a number of years, and only yesterday we established that we had the meta tag directly below the tag and our title & meta description further down the page; we have corrected this today. I'm not sure what effect this would have on the way Google reads the homepage, but we are trying everything to get the homepage ranking for those primary keywords.

    Our current developers and ex-SEO guys are all part of the same company and cannot pinpoint anything, other than telling us to carry on with their SEO team because it will take time, which just comes across as a milking exercise. Another thing I have found very strange is the data on our 'traffic audience'. We are a UK-based website, yet our traffic stats show UK 36.6%, Denmark 35.8% and India 27.6% - that doesn't make sense to me!

    Is there anybody out there who could simply point us in the right direction to the problem(s), so we can fix this once and for all? Could there be anything within the code that is causing the homepage not to appear in Google for our primary keyword terms such as blinds, window blinds, etc.? I would appreciate any advice at all that may help us in our quest to sort this homepage issue once and for all.

    Read the article

  • What are the ways to get included in Google Alerts mails

    - by stacker
    There is a news website / blog - I'll call it the Site from now on. The Site mentions a keyword in one of its articles. I added a Google Alert for the keyword, but I do not get any alert for the word from the Site. Furthermore, searching for the keyword in the news section doesn't return any results from the Site. The Site has a sitemap that is indexed by Google every x minutes, and in the main search, searching for a news title brings the item up at the top. What are the best ways to get included in Google Alerts mails?

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log:

        95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-"

    What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs), so at this point I'm more curious than afraid.

    Read the article

  • Social is the new search

    - by John Flyn
    Before you read my question, I want to let you know that you have the freedom to delete or migrate it; I know it probably does not belong here, but I could not find any Stack Exchange site where I could ask this. OK. So, what does "social is the new search" mean from a webmaster/dev standpoint? Does it even make sense, or is it just some marketing BS? I'm trying to wrap my head around this, and it is just not clicking. (And hearing the word "social" makes me want to throw up sometimes.) Thanks in advance.

    Read the article

  • Does Fetch as Googlebot still support their ajax-crawling proposal?

    - by Gunchars
    I spent half a day implementing the server-side HTML generation for modal pages based on their proposal (link), but it seems the Fetch as Googlebot functionality in Webmaster Tools completely ignores the URL fragment. I've verified that the _escaped_fragment_ functionality is working on my server (example), but when I submit a URL like /#!/recipes, Googlebot just fetches /. There aren't any recent confirmations that it's working and, honestly, it wouldn't surprise me if they had silently dropped the functionality without even editing the docs.
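
    For anyone reproducing the setup, the scheme's mapping is mechanical: a #! URL is fetched by the crawler with the fragment moved into the _escaped_fragment_ query parameter, and the server answers with the snapshot. A hedged Apache sketch of the server half (the snapshot path is an example):

        RewriteEngine On
        # /#!/recipes is requested by the crawler as /?_escaped_fragment_=/recipes
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
        RewriteRule ^$ /snapshots%1? [L]

    Requesting /?_escaped_fragment_=/recipes by hand at least confirms the server half, independently of whatever Fetch as Googlebot does with the fragment.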

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror site of www.example.com. This means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL; in it I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google's search results: the subdomain shows the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags
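
    As a hedged aside (a simplification, not a confirmed fix): the THE_REQUEST condition isn't needed to target the file, since the rule pattern already names it, and matching the bare host as well avoids depending on the non-www redirect above:

        RewriteCond %{HTTP_HOST} ^(www\.)?static\. [NC]
        RewriteRule ^robots\.txt$ /robots_static.txt [L]

    If Webmaster Tools still shows robots.txt afterwards, check which host the property is verified for - the tool fetches robots.txt for the exact host of the property, so a www.example.com property will never display robots_static.txt.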

    Read the article

  • htaccess url rewrite

    - by user761396
    I used to have these rewrite rules:

        RewriteRule ^([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]

    Now I have to change them to:

        RewriteRule ^1/([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^1/([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]
        RewriteRule ^2/([^/]+)\.htm$ index2.php?c=$1 [NC]
        RewriteRule ^2/([^/]+)\.htm/([0-9.]+)$ index2.php?c=$1&amt=$2 [NC]

    (The difference is adding a subdirectory.) My question is: how do I redirect my old URLs to the 1/ subdirectory? Thank you.
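
    A hedged sketch of the extra redirect, placed above the new rules so requests for the old flat URLs get sent to the 1/ versions first:

        # Redirect legacy /page.htm and /page.htm/12.34 URLs into the new /1/ prefix.
        RewriteCond %{REQUEST_URI} !^/[12]/
        RewriteRule ^([^/]+\.htm(/[0-9.]+)?)$ /1/$1 [R=301,L,NC]

    After the external 301, the existing ^1/... rules pick the request up and route it to index.php as before.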

    Read the article
