
  • Two <select> always next to each other inside <td>? [closed]

    - by Radek
    I have two selects inside a td, and I want to make sure that they are next to each other at all times, with the td's width being exactly the width of the two selects, no more. The values displayed in the selects change based on data.

        <td>
          <select name="db2.rfthdd">
            <option value="WEI">WEI</option>
            <option value="SCOTSdatabase">SCOTSdatabase</option>
          </select>
          <select id="db2.rfttimestamp">
            <option value="20110302122831">2011-03-02-122831</option>
            <option value="20110302122442">2011-03-02-122442</option>
          </select>
        </td>
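
    A minimal sketch of one CSS approach (the class name is an assumption): nowrap keeps the two selects on a single line, and the 1% width trick makes the table cell shrink-wrap to exactly their combined width.

        /* applied as <td class="select-pair"> */
        td.select-pair {
          white-space: nowrap;  /* never wrap the selects onto two lines */
          width: 1%;            /* table layout expands this to fit the content, no more */
        }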

    Read the article

  • Affiliate software to attract incoming customers

    - by Steve
    I am close to starting a new website for a small business which imports products from the USA to Australia. The wholesaler says he will allow my client to be the sole distributor for Australia & New Zealand. I'm not sure yet what CMS or shopping cart software to use, but it will need to include an affiliate system to allow advertisers to push customers our way. Do you have any suggestions for robust, flexible affiliate software? Thanks.

    Read the article

  • A lot of "(direct) / (none)" traffic in Google Analytics

    - by Yoga
    My web site has a lot of "(direct) / (none)" traffic (over 50%) in Google Analytics, but under "Audience", 100% are new visitors. Why is that? I am quite sure most of the audience should be new visitors, but why so much "(direct) / (none)" traffic? Update: We have since launched a new site, with which this number dropped significantly, so I am interested in knowing why the number was so high in the past.

    Read the article

  • Why is Google Webmaster Tools crawling invalid URLs and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs are valid: they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects (none), and checked for unclosed tags or anything that would throw www.youtube.com into the URL by accident (none). In every 'linked from', the referring URL is also a bad URL with www.youtube.com in it. The Google tools report no malware, and I can't check the server logs because the host won't give me access. Really stuck!! Any ideas appreciated!

    Read the article

  • How to correctly track analytics when using an iframe

    - by Sherry Ann Hernandez
    In our main aspx page we have this analytics code:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-1301114-2']);
          _gaq.push(['_setDomainName', 'florahospitality.com']);
          _gaq.push(['_setAllowLinker', true]);
          _gaq.push(['_trackPageview']);
          _gaq.push(function() {
            var pageTracker = _gat._getTrackerByName();
            var iframe = document.getElementById('reservationFrame');
            iframe.src = pageTracker._getLinkerUrl('https://reservations.synxis.com/xbe/rez.aspx?Hotel=15159&template=flex&shell=flex&Chain=5375&locale=en&arrive=11/12/2012&depart=11/13/2012&adult=2&child=0&rooms=1&start=availresults&iata=&promo=&group=');
          });
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Then inside this aspx page is an iframe. Inside the iframe we set up this analytics code:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-1301114-2']);
          _gaq.push(['_setDomainName', 'reservations.synxis.com']);
          _gaq.push(['_setAllowLinker', true]);
          _gaq.push(['_trackPageview', 'AvailabilityResults']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    The problem is that I see two pageviews when I go looking for the AvailabilityResults page. The first one is direct traffic and the other one is cpc. How come they have different sources? I was expecting both of them to be direct traffic.

    Read the article

  • What are some client-side sitemap generators?

    - by BHare
    Most of the sitemap generators I've found scan my internal files and base the sitemap on that. However, using Apache and htaccess I have many aliases; for example, /brand/model/photos/ is empty directory-wise, but on the web it maps to a zenphoto gallery via htaccess. http://www.xml-sitemaps.com/ does what I need, but I'd like to have more control over it, and I need it to be dynamic and run weekly or daily, allowing specific change frequencies based on the file/directory. I also want to be able to generate a "pretty" sitemap HTML file. Counting the photo gallery I have around 300 links; without the photo gallery, more like 100. I have MySQL and PHP 5.3 installed.
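
    A minimal sketch of a crawl-based generator in PHP 5.3 (the base URL, the changefreq rule, and the output path are all assumptions): because it walks the rendered pages rather than the filesystem, htaccess aliases like /brand/model/photos/ are picked up. It could be run weekly or daily from cron, and the same $urls array could feed a second template that renders the "pretty" HTML sitemap.

        <?php
        // Crawl the live site and emit sitemap.xml (sketch; fine for a few hundred URLs).
        $base  = 'http://www.example.com';   // assumed base URL
        $queue = array('/');
        $seen  = array('/' => true);
        $urls  = array();

        while ($queue) {
            $path = array_shift($queue);
            $html = @file_get_contents($base . $path);
            if ($html === false) continue;
            $urls[] = $path;
            // Collect same-site links from the rendered HTML, not the filesystem,
            // so htaccess-aliased URLs are included.
            if (preg_match_all('~href="(/[^"]*)"~', $html, $m)) {
                foreach ($m[1] as $link) {
                    if (!isset($seen[$link])) { $seen[$link] = true; $queue[] = $link; }
                }
            }
        }

        $xml = '<?xml version="1.0" encoding="UTF-8"?>' . "\n"
             . '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
        foreach ($urls as $path) {
            // Hypothetical per-directory rule: galleries change weekly, the rest daily.
            $freq = (strpos($path, '/photos/') !== false) ? 'weekly' : 'daily';
            $xml .= '  <url><loc>' . htmlspecialchars($base . $path) . '</loc>'
                  . '<changefreq>' . $freq . '</changefreq></url>' . "\n";
        }
        $xml .= '</urlset>';
        file_put_contents('sitemap.xml', $xml);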

    Read the article

  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content based on a geolocation lookup by IP address, in order to provide information relevant to the general area the visitor is from. I'm concerned that the content will be interpreted as focused solely on the search engine spider's geographic origin when it is indexed. Are there SEO best practices for geolocation-by-IP-address features? I'd appreciate any specific tips or words of wisdom.

    Read the article

  • How to localize new posts in ASP.NET? [closed]

    - by ntechi
    I am doing my final year project and have decided to make a website in ASP.NET, using Microsoft Visual Studio 2008. I'm making a real estate properties website. I want to know how to localize or create new posts in ASP.NET (like in WordPress), and also, when I hit SEARCH, it should search for the desired keyword or the searched post. If posts are not possible then it should display pages...

    Read the article

  • WordPress bot issues

    - by Paul
    I need to implement a blog into a client's site as he is unhappy with his current basic CMS-driven solution. It needs to suit both SEO and the current style, and as I'm a front end dev/designer and they don't have budget to redevelop, the only solution I can think of is to set up a WordPress blog and restyle it to suit. My only worry about this is the current press reports on WordPress being attacked by web bots. I understand the main worry is if you use an ID of admin, but I'm concerned that regardless of this the site could be bombarded with bot requests, causing timeouts! Is this valid? If so, is there any way to avoid this issue? If not, can anyone recommend another good SEO-friendly blog solution?

    Read the article

  • Redirect Permanent and https

    - by Clem
    I just set up https on my server, and I have an issue with Redirect permanent. If I have a link, for example http://domain.com/index.html, it redirects me to https://www.domain.comindex.html. The / is missing and I can't figure out how to fix it. It works with http://www.domain.com/index.html. Here is my httpd.conf:

        <VirtualHost *:80>
            ServerName domain.com
            Redirect permanent / https://www.domain.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain.com
            Redirect permanent / https://www.domain.com/
        </VirtualHost>

        <VirtualHost *:443>
            DocumentRoot /var/www/domain/
            ServerName www.domain.com
            SSLEngine on
            SSLCertificateFile ssl.crt
            SSLCertificateKeyFile ssl.key
        </VirtualHost>
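
    Since Redirect simply appends the rest of the request path to its target, the missing-slash symptom usually means a target of https://www.domain.com with no trailing slash is in effect somewhere (a stray .htaccess is a common culprit). A minimal sketch of a rewrite-based alternative for the port-80 vhosts (assuming mod_rewrite is enabled), which makes the slash explicit in the substitution:

        <VirtualHost *:80>
            ServerName domain.com
            ServerAlias www.domain.com
            RewriteEngine On
            # capture the path without its leading slash, re-add it explicitly
            RewriteRule ^/?(.*)$ https://www.domain.com/$1 [R=301,L]
        </VirtualHost>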

    Read the article

  • Cloud hosting vs dedicated hosting: advantages and disadvantages

    - by bcmcfc
    I'm currently looking for a hosting company that can provide a very solid service with a 100% SLA. In the search both cloud hosting and managed dedicated hosting have come up. (I'd rather not manage the server myself as I'm still rather new to Linux.) I'm not sure if phrasing this as a "which is best" would make sense, but what advantages does cloud hosting have over dedicated server hosting? I need a reliable service above all else, and some elements of the application to be hosted will be relatively CPU intensive, although those spikes in CPU usage will be sporadic, so the hosting needs to be able to deal with that.

    Read the article

  • Duplicate content in Top Level Domain and country specific website

    - by Ando
    I have myproduct.com, which is my master product page. For the UK I also own myproduct.co.uk, which is a copy of myproduct.com with some localized content: landing page, promotions, prices, and specific tags. But there is also duplicate content: myproduct.com/FAQs/ is the same as myproduct.co.uk/FAQs/. I don't want to redirect from myproduct.co.uk/FAQs/ to myproduct.com/FAQs/, as I don't want people to leave the localized website. myproduct.com/FAQs/ is my "go-to" FAQ page and the most likely to be up to date, so I want this page to be indexed by search engines, whereas I don't care about myproduct.co.uk/FAQs/ being indexed (unless indexing this page would increase my page rank :) ). What should I do now to be SEO friendly and SEO optimal? Stop indexing of myproduct.co.uk/FAQs/ via robots.txt? Do some rel="alternate" hreflang="x" configuring on both /FAQs/ pages? Something else?
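
    A minimal sketch of the markup route (URLs assumed; one common approach rather than the only one): a cross-domain canonical on the UK FAQ consolidates indexing onto the master copy without redirecting visitors, while hreflang alternates tell search engines the two sites are regional variants of each other:

        <!-- on myproduct.co.uk/FAQs/ : point indexing at the master copy -->
        <link rel="canonical" href="http://myproduct.com/FAQs/" />

        <!-- on matching pages of both sites: declare the regional variants -->
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/" />
        <link rel="alternate" hreflang="en" href="http://myproduct.com/" />

    Unlike a robots.txt block, the canonical still lets link signals pointing at the .co.uk page flow through to the master version.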

    Read the article

  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies with an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund when you look at their Twitter accounts:

        World_Wildlife : verified account with 200K followers
        WWF            : main account with 800K followers
        wwf_uk         : lower case, with an underscore between WWF and the country indicator
        WWFCanada      : upper case, with the country indicator attached to WWF
        ...

    I am planning to build a website which will hopefully grow global, and I would like to avoid this sort of inconsistency. Also, comparing what Twitter and Facebook allow in their usernames, I found out that they don't allow the same characters (for instance, the former doesn't allow "." whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)? Is there any research out there showing whether some schemes are better than others in terms of readability and/or SEO?

    Read the article

  • Web Development Environment: How to distribute edited hosts files over a bunch of Mac machines?

    - by Alex Reds
    I am doing some research to prepare a web development environment for our small (10 people and growing) new office. Use case: for each new web project we usually create a new alias on an Apache server, someproject.companywebsite. From my understanding, in order to see this website locally, all the rest of our team (including managers and directors) will need to edit their hosts file (e.g. "192.168.1.10 someproject.companywebsite"), and do that each time for a new project (which can be 2-5 each week). Solution: I am looking for a way to edit this hosts file only once and distribute it to all Mac machines in our network at once, or at least much more smoothly than poking around with each machine over and over again. Is that possible? Or is that a very wrong way of doing it? Perhaps we'd better set up our own local DNS server and point our router to it? Though running our own DNS server concerns me a bit because of possible network interruptions and other lags, if you know what I mean. Or perhaps there are other workflows for this? What's the best way to handle such things? I'd be grateful to hear some advice from experienced admins. I couldn't find this info on the internet, so if you know where to read about it, point me in the right direction. Thank you in advance. Alex
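
    A minimal sketch of the local-DNS route with dnsmasq (the server IP and name suffix are taken from the example above; everything else is an assumption): one wildcard line resolves every current and future project alias to the dev server, so no per-project edits are needed on any machine:

        # /etc/dnsmasq.conf on a small office DNS box
        # every *.companywebsite name resolves to the dev Apache server
        address=/companywebsite/192.168.1.10

    Pointing the router's DHCP at this box as primary DNS, with the normal upstream resolver as secondary, means a dnsmasq outage degrades to plain internet access rather than a full network interruption.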

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax:

        http://www.bing.com/search?q=keywords+keywords&[some other variables]

    However, I just noticed that maybe 10-20% of them are coming in like this:

        http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]

    The first syntax gives me the keywords the user typed in, but the second actually gives me both the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether, because I have a customized referral report that filters out URLs containing my domain. But now that I've noticed them, I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second kind seems to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referrers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software, but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.

    Read the article

  • Google Analytics reverse transaction not working with sales performance

    - by prasad maganti
    We have a Google Analytics account and are trying to do a reverse transaction. We created a transaction on one date and a reverse transaction on another date. After the reverse transaction, the original disappears from the transactions list. Is this expected or abnormal behavior? Also, if we check the same order data under sales performance, the reverse transaction is not reflected on the date we created the transaction; it is reflected on the date we made the reverse transaction. It should not work like this: the reverse transaction should affect the same date as the original transaction.
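
    For reference, a minimal sketch of the usual reversal pattern in classic (ga.js) ecommerce tracking, with all IDs and amounts assumed: the reversal is just a new hit with negated total and quantity, which is also why GA books it on the date the reversing hit arrives rather than on the original transaction date; hits are timestamped when processed, and amending historical dates isn't supported:

        // sent on the day of the refund, reusing the original transaction ID (assumed '1234')
        _gaq.push(['_addTrans',
          '1234',         // transaction ID of the order being reversed
          'Store',        // affiliation
          '-49.99'        // negated total reverses the revenue
        ]);
        _gaq.push(['_addItem',
          '1234',         // same transaction ID
          'SKU-001',      // SKU (assumed)
          'Widget',       // product name
          'Widgets',      // category
          '49.99',        // unit price stays positive
          '-1'            // negated quantity reverses the units
        ]);
        _gaq.push(['_trackTrans']);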

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with HostMonster. HostMonster had very good reviews on some comparison websites, and in general so far they appeared to be perfectly good in every way... at least I thought so until just now... I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase in a Google search. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48hrs the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had appeared to be very old, like three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk spam trick, and I had indeed been getting some spam from "attracta". So my questions are:

    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between Attracta and HostMonster?
    3. Does the Attracta robots file explain the lack of re-caching?

    Read the article

  • Quick question: what exactly does robots.txt Disallow: /*/ do?

    - by Exit
    An SEO firm suggested changing the robots.txt to:

        User-agent: *
        Disallow: /*/
        Allow: /ims/

    I'm not sure what that would do, but my guess is that it would tell all robots to index nothing but the ims folder. I understand the wildcard, but I'm confused by the slashes and don't know how they would play out in conjunction with the wildcard.

    Update: I didn't mention that there is a sitemap listed in the robots.txt file, but according to one tech blogger, sitemaps trump robots exclusions. So, even though Google Webmaster Tools says that everything with a trailing slash will not be indexed, the sitemap contains the important links. I did notice that the link count on Google went from 360 to 336, and the sitemap links under the URL scaled back to 3 from 6. I'm not sure of the cause or which links were removed, though. Perhaps it cleaned out garbage. I'm still clueless why they would add in "Allow: /ims/"; that seems pointless. And a quick list of what would be indexed according to the robots rules above (without the sitemap), using /*/:

        domain.com                   Indexed
        domain.com/page.html         Indexed
        domain.com/folder/           Not indexed
        domain.com/folder/page.html  Not indexed

    Read the article

  • Webshop for digital goods with voucher / gift card system [duplicate]

    - by Kelzama
    This question already has an answer here: Which Ecommerce Script Should I Use? (1 answer)

    I'm searching for a webshop which provides the following:

    - The shop offers digital goods (like mp3)
    - A user can buy a voucher / gift card at a reseller (or a code is provided with the CD)
    - A user can enter his code at the webshop and gets the download (unregistered)
    - A user can enter his code at the webshop and the download is added to his/her library (registered)
    - Optional: resellers can buy codes from the webshop

    I already tried PrestaShop, as it looks quite nice, but it needs a lot of custom programming (and has a very strange voucher system): the customer has to add the file to the basket and apply the voucher at checkout, and I want to skip that ;) Is there a webshop (or CMS + plugin) which provides the things I need? It could also be a CMS with a storage/folder plugin (like Joomla + K2) and a possibility to activate downloads via unique codes. Any ideas are highly appreciated :) Thanks in advance.

    Read the article

  • SKU code as description in Google Analytics

    - by dreagan
    In the Google Analytics ecommerce tracking script you must provide an SKU code for every item. I have this code for every product I'm selling, and up until now I have always provided it in the _addItem method. But when reviewing that data in the ecommerce module of Google Analytics, I have no real, readable data about my SKU sales. I know what product has been sold, due to the product name I provide, but when clicking through to the SKU level I know nothing more, since all I can see there are SKU codes. Is it possible and wise to replace the SKU code with the following template? "product-name colour-name size-name" This way it should still be a unique field, but more readable afterwards.
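
    A minimal sketch of what that would look like in classic (ga.js) ecommerce tracking, with all values assumed: the SKU slot in _addItem is just a string, so a readable composite works as long as it stays unique per variant:

        _gaq.push(['_addItem',
          '1234',                      // transaction ID
          'blue-widget navy small',    // readable composite in the SKU slot (assumed format)
          'Blue Widget',               // product name
          'Widgets',                   // category
          '19.99',                     // unit price
          '1'                          // quantity
        ]);

    One trade-off worth noting: reports keyed on the old opaque SKUs won't merge with the new readable ones, so the switch is effectively a fresh start at the SKU level.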

    Read the article

  • Default WordPress site on IIS

    - by Mike
    We have multiple WordPress installations on our IIS7 (Windows Server 2008) server as follows:

        http://www.example.com/site_one
        http://www.example.com/site_two
        http://www.example.com/site_three

    These all work properly. However, we would like to configure it so that when users visit the root domain (http://www.example.com/) or any page underneath, i.e.:

        http://www.example.com/
        http://www.example.com/page1
        http://www.example.com/page2

    they would actually see the corresponding pages for site_two:

        http://www.example.com/site_two/
        http://www.example.com/site_two/page1
        http://www.example.com/site_two/page2

    How could we achieve this?
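
    A minimal sketch of one way to do this with the IIS URL Rewrite module in the root web.config (the rule name is an assumption, and the module is a separate install): requests not already aimed at one of the three site folders are rewritten, not redirected, into site_two, so visitors keep seeing the root URLs:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="root-to-site_two" stopProcessing="true">
                <match url="^(.*)$" />
                <conditions>
                  <!-- leave direct requests to the three installs untouched -->
                  <add input="{REQUEST_URI}" pattern="^/site_(one|two|three)" negate="true" />
                </conditions>
                <action type="Rewrite" url="site_two/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    WordPress itself would also need its site/home URL settings adjusted so the links it generates point at the root domain rather than /site_two.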

    Read the article

  • Apache domain redirect to subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it's in a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but did not know how to do that. How it's set up now:

        primary domain: www.domain.com  ->  /
        sub.domain.com                  ->  /sub

    I want to create a directory structure and then redirect to each, but only show www.domain.com in the URL:

        www.domain.com  ->  /domain/www
        sub.domain.com  ->  /domain/sub

    I tried using:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www.)?domain.com$
        RewriteRule ^(/)?$ domain/www [L]

    but it just changes the URL to www.domain.com/domain/www. Can this be done in htaccess?
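
    A minimal sketch of a fuller htaccess rule set (an assumption, not GoDaddy-specific advice): match all paths rather than only the empty one, exclude requests already inside the target folders to prevent a loop, and rewrite with a trailing slash. The visible /domain/www in the URL is often mod_dir adding a slash via an external redirect when the rewrite target is a directory without one.

        RewriteEngine On

        # www.domain.com -> served from /domain/www, URL unchanged
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/
        RewriteRule ^(.*)$ /domain/www/$1 [L]

        # sub.domain.com -> served from /domain/sub
        RewriteCond %{HTTP_HOST} ^sub\.domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/
        RewriteRule ^(.*)$ /domain/sub/$1 [L]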

    Read the article

  • Shifting from no-www to www and browsers' password storage

    - by user1444680
    I created a website with a user-registration system and invited my friend to join it. I gave him the link to the no-www version, http://mydomain.com, but now, after reading this and this, I want to shift to www.mydomain.com. But there's a problem: I saw that my browser stores separate passwords for mydomain.com and www.mydomain.com. So in my friend's browser his password must have been stored for no-www. That means that after I shift to www, the next time he opens the login page his browser won't auto-fill the username and password fields, and there will also be an extra entry (for no-www) in his browser's database of stored passwords. Can this be avoided? Can I do something to convey to browsers that www.mydomain.com and mydomain.com are the same website? I already have a CNAME record for www pointing to mydomain.com, but it seems that search engines consider a CNAME an alias while browsers consider the two different websites, and I don't know why.
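
    For what it's worth, a minimal sketch of the usual server-side half of such a move (Apache htaccess, domain taken from the question): a 301 so every old no-www link lands on the www origin. It won't merge already-stored browser passwords, since password managers key entries on the exact hostname, but it keeps everyone on a single origin from then on:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]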

    Read the article

  • Grid layout default on WordPress theme

    - by nathan philpott
    I'm having trouble with a multi-layout option on a WordPress theme, Sight (http://sight.wpshower.com/). Visitors have the option of a grid or a list layout at the click of a button. At present the list layout is the default; I am interested in making the grid layout the default. This is some of the PHP. I tried simply swapping the word "grid" for "list", but although this works to an extent, if done on the loop.php page it removes the a:hover functions on the post boxes in the grid format, and if done on index.php it switches the buttons on the main index page. Any ideas?

    loop.php:

        <div id="loop" class="<?php if ($_COOKIE['mode'] == 'grid') echo 'grid'; else echo 'list'; ?> clear">
        <?php while ( have_posts() ) : the_post(); ?>
          <div <?php post_class('post clear'); ?> id="post_<?php the_ID(); ?>">
            <?php if ( has_post_thumbnail() ) :?>
              <a href="<?php the_permalink() ?>" class="thumb"><?php the_post_thumbnail('thumbnail', array(
                'alt'   => trim(strip_tags( $post->post_title )),
                'title' => trim(strip_tags( $post->post_title )),
              )); ?></a>
            <?php endif; ?>
            <div class="post-category"><?php the_category(' / '); ?></div>
            <h2><a href="<?php the_permalink() ?>"><?php the_title(); ?></a></h2>
            <!-- <div class="post-meta">by <span class="post-author"><a href="<?php echo get_author_posts_url(get_the_author_meta('ID')); ?>" title="Posts by <?php the_author(); ?>"><?php the_author(); ?></a></span> on <span class="post-date"><?php the_time(__('M j, Y')) ?></span> <em>&bull; </em><?php comments_popup_link(__('No Comments'), __('1 Comment'), __('% Comments'), '', __('Comments Closed')); ?> <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?> </div> -->
            <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?>
            <div class="post-content"><?php if (function_exists('smart_excerpt')) smart_excerpt(get_the_excerpt(), 55); ?></div>
          </div>
        <?php endwhile; ?>
        </div>
        <?php endif; ?>

    index.php:

        <?php get_header(); ?>

        <div class="content-title">
          Projects <a href="javascript: void(0);" id="mode"<?php if ($_COOKIE['mode'] == 'grid') echo ' class="flip"'; ?>></a>
        </div>

        <?php query_posts(array(
          'post__not_in' => $exl_posts,
          'paged'        => $paged,
        ) ); ?>

        <?php get_template_part('loop'); ?>
        <?php wp_reset_query(); ?>
        <?php get_template_part('pagination'); ?>

        <?php get_footer(); ?>
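
    A minimal sketch of one way to flip the default (an assumption about the cause: on a first visit the mode cookie is simply unset, so every test for == 'grid' fails and the theme falls back to list). Testing for 'list' instead, in both files, makes an unset cookie mean grid while an explicit list choice still works:

        <!-- loop.php: an unset cookie now falls through to grid -->
        <div id="loop" class="<?php echo (isset($_COOKIE['mode']) && $_COOKIE['mode'] == 'list') ? 'list' : 'grid'; ?> clear">

        <!-- index.php: keep the toggle button state consistent with the same test -->
        <a href="javascript: void(0);" id="mode"<?php if (!isset($_COOKIE['mode']) || $_COOKIE['mode'] != 'list') echo ' class="flip"'; ?>></a>

    Changing the test in both places at once, rather than hard-coding 'grid' in one file, is what keeps the button state and the theme's cookie-driven JavaScript (including the hover behavior) in agreement.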

    Read the article

  • What to do with database in dev/production phases of a website?

    - by TheLQ
    For a while now I've been keeping a website I'm developing in the standard dev/production phases. It's been pretty simple: a Mercurial repo for dev, a repo for production. Do work in dev, get it approved, push to production. But now I'm trying to apply this process to a new website that has a database, and I'm struggling to figure out a development strategy. What I didn't mention above is that I do all my work in my own repo, push it to dev, then later push it to production, so it's 3 different servers. So how do I manage my database? The obvious solution of a mysqldump on every commit isn't going to happen, and a dump at the end of the day isn't all that helpful when you want to later undo one change that happened in the middle of the day. What is the best way to accomplish this?
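
    A minimal sketch of the usual answer, schema migrations (the file layout, names, and version table are all assumptions): every database change is a small numbered SQL script committed alongside the code that needs it, so each of the three servers can replay forward to the same state, and a single mid-day change stays individually revertable:

        migrations/
          001_create_users.sql
          002_add_index_on_email.sql
          003_add_last_login_column.sql

        -- each script records its inverse, e.g. in 003:
        ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
        -- down: ALTER TABLE users DROP COLUMN last_login;

    A one-row table such as schema_version on each server records the last applied number, so a deploy script (or a migration tool bundled with most frameworks) applies only the scripts the target database hasn't seen yet.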

    Read the article
