Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • What is a good replacement for MS Frontpage?

    - by Clay Nichols
    I've been using MS FrontPage 2003 to maintain our company website for years. I'm looking for a replacement that can:

    - Import/convert an MS FrontPage website and "modernize" it (clean up the HTML to make it standards compliant, etc.)
    - Support (or convert) the substitutions (Include Page and Text substitutions) that are applied when the page is published, so they become static HTML
    - Leverage my knowledge of FrontPage

    The likely contender looks like Microsoft Expression Web, but I'm open to objective suggestions.

    Read the article

  • grid layout default on wordpress theme

    - by nathan philpott
    I'm having trouble with a multi-layout option on a WordPress theme, Sight (http://sight.wpshower.com/). Visitors have the option of a grid or a list layout at the click of a button; at present the list layout is the default, and I want to make the grid layout the default instead. Below is some of the PHP. I tried simply swapping the word 'grid' for 'list', and although this works to an extent, doing it in loop.php removes the a:hover functions on the post boxes in the grid format, and doing it in index.php switches the buttons on the main index page. Any ideas?

    loop.php

        <div id="loop" class="<?php if ($_COOKIE['mode'] == 'grid') echo 'grid'; else echo 'list'; ?> clear">
        <?php while ( have_posts() ) : the_post(); ?>
            <div <?php post_class('post clear'); ?> id="post_<?php the_ID(); ?>">
                <?php if ( has_post_thumbnail() ) :?>
                <a href="<?php the_permalink() ?>" class="thumb"><?php the_post_thumbnail('thumbnail', array(
                    'alt' => trim(strip_tags( $post->post_title )),
                    'title' => trim(strip_tags( $post->post_title )),
                )); ?></a>
                <?php endif; ?>
                <div class="post-category"><?php the_category(' / '); ?></div>
                <h2><a href="<?php the_permalink() ?>"><?php the_title(); ?></a></h2>
                <!-- <div class="post-meta">by <span class="post-author"><a href="<?php echo get_author_posts_url(get_the_author_meta('ID')); ?>" title="Posts by <?php the_author(); ?>"><?php the_author(); ?></a></span> on <span class="post-date"><?php the_time(__('M j, Y')) ?></span> <em>&bull; </em><?php comments_popup_link(__('No Comments'), __('1 Comment'), __('% Comments'), '', __('Comments Closed')); ?> <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?> </div> -->
                <?php edit_post_link( __( 'Edit entry'), '<em>&bull; </em>'); ?>
                <div class="post-content"><?php if (function_exists('smart_excerpt')) smart_excerpt(get_the_excerpt(), 55); ?></div>
            </div>
        <?php endwhile; ?>
        </div>
        <?php endif; ?>

    index.php

        <?php get_header(); ?>
        <div class="content-title">
            Projects
            <a href="javascript: void(0);" id="mode"<?php if ($_COOKIE['mode'] == 'grid') echo ' class="flip"'; ?>></a>
        </div>
        <?php query_posts(array(
            'post__not_in' => $exl_posts,
            'paged' => $paged,
        ) ); ?>
        <?php get_template_part('loop'); ?>
        <?php wp_reset_query(); ?>
        <?php get_template_part('pagination'); ?>
        <?php get_footer(); ?>
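
    A minimal sketch of one way to change the default, assuming the theme keeps the layout choice in a cookie named mode as in the snippets above (the helper name sight_layout_mode() is hypothetical): compute the mode once, defaulting to 'grid' when no cookie is set, and use it in both templates so they can never disagree. The JavaScript that sets the cookie and flips the button class may also need its own default adjusted.

        <?php
        // Hypothetical helper (e.g. in functions.php): current layout mode,
        // defaulting to 'grid' when the visitor has no 'mode' cookie yet.
        function sight_layout_mode() {
            if ( isset( $_COOKIE['mode'] ) && in_array( $_COOKIE['mode'], array( 'grid', 'list' ), true ) ) {
                return $_COOKIE['mode'];
            }
            return 'grid'; // assumed new default
        }
        ?>

        <!-- loop.php -->
        <div id="loop" class="<?php echo sight_layout_mode(); ?> clear">

        <!-- index.php -->
        <a href="javascript: void(0);" id="mode"<?php if ( sight_layout_mode() == 'grid' ) echo ' class="flip"'; ?>></a>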

    Read the article

  • Quick Question, robots.txt Disallow: /*/ does what exactly?

    - by Exit
    An SEO firm suggested changing the robots.txt to:

        User-agent: *
        Disallow: /*/
        Allow: /ims/

    I'm not sure what that would do, but my guess is that it would tell all robots to index nothing but the /ims/ folder. I understand the wildcard, but I'm confused by the slashes and don't know how they would play out in conjunction with it.

    Update: I didn't mention that there is a sitemap listed in the robots.txt file, but according to one tech blogger, sitemaps trump robots exclusions. So, even though Google Webmaster Tools says that everything with a trailing slash will not be indexed, the sitemap contains the important links. I did notice that the link count on Google went from 360 to 336, and the sitemap links under the URL scaled back to 3 from 6. I'm not sure of the cause or which links were removed, though. Perhaps it cleaned out garbage. I'm still clueless why they would add 'Allow: /ims/'; that seems pointless. And here is a quick list of what would be indexed according to the robots rules above (without the sitemap), using /*/:

        domain.com                     Indexed
        domain.com/page.html           Indexed
        domain.com/folder/             Not indexed
        domain.com/folder/page.html    Not indexed
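
    For reference, a sketch of the same rules annotated the way a wildcard-aware crawler such as Googlebot would read them (an assumption: robots.txt wildcards and Allow are extensions that not every crawler supports, and Disallow controls crawling rather than indexing as such):

        User-agent: *
        # /*/ matches any URL whose path contains a second slash, i.e. anything
        # inside a subdirectory: /folder/ and /folder/page.html are blocked,
        # while / and /page.html are not.
        Disallow: /*/
        # Exception: paths under /ims/ stay crawlable. For Googlebot the most
        # specific (longest) matching rule wins, so this Allow overrides the
        # Disallow for anything under /ims/.
        Allow: /ims/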

    Read the article

  • Redirect Permanent and https

    - by Clem
    I just set up HTTPS on my server, and I have an issue with Redirect permanent. If I have a link such as http://domain.com/index.html, it redirects me to https://www.domain.comindex.html. The / is missing and I can't figure out how to fix it. It works with http://www.domain.com/index.html. Here is my httpd.conf:

        <VirtualHost *:80>
            ServerName domain.com
            Redirect permanent / https://www.domain.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain.com
            Redirect permanent / https://www.domain.com/
        </VirtualHost>

        <VirtualHost *:443>
            DocumentRoot /var/www/domain/
            ServerName www.domain.com
            SSLEngine on
            SSLCertificateFile ssl.crt
            SSLCertificateKeyFile ssl.key
        </VirtualHost>
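
    For reference, a sketch of how mod_alias builds the target URL. This is an assumption about the cause, since the vhosts shown above already carry the trailing slash; the offending directive may live somewhere else (another vhost or a .htaccess file):

        # mod_alias replaces the matched URL prefix with the target string and
        # appends the rest of the requested path verbatim, so the trailing slash matters.

        # Without the slash: /index.html -> "https://www.domain.com" + "index.html"
        #                    = https://www.domain.comindex.html  (the behaviour described)
        Redirect permanent / https://www.domain.com

        # With the slash:    /index.html -> "https://www.domain.com/" + "index.html"
        #                    = https://www.domain.com/index.html
        Redirect permanent / https://www.domain.com/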

    Read the article

  • Magento error when Fedex shipping from US to Canada

    - by Tony Lush
    When shipping from the US to Canada, the Magento 1.3.x shipping module allows Fedex ground shipping, but should not. International Ground should be the minimum available going to Canada, since Fedex charges a paperwork fee on top of the transportation costs. Is there a way to fix the module, or do we have to resort to removing Canada from Fedex's permitted countries and setting up an entirely different shipping option for Canada?

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax:

        http://www.bing.com/search?q=keywords+keywords&[some other variables]

    However, I just noticed that maybe 10-20% of them are coming in like this:

        http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]

    The first syntax gives me the keywords the user typed in, but the second gives me both the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether, because I have a customized referral report that filters out URLs containing my domain. But now that I've noticed them, I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referrers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others to have the /url?source syntax?

    P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks.

    P.P.S. I am not talking about data in Google Analytics or similar software, but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.
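
    A minimal PHP sketch of reading both formats from the raw referrer (variable names are hypothetical; only the q and url parameters mentioned above are assumed):

        <?php
        // Recover the keywords and, when present, the landing page from a Bing referrer,
        // covering both the /search?q= and the /url?source=search&url= formats.
        $referer  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        $keywords = null;
        $landing  = null;

        $parts = parse_url($referer);
        if (!empty($parts['host']) && substr($parts['host'], -8) === 'bing.com' && !empty($parts['query'])) {
            parse_str($parts['query'], $params);         // also URL-decodes the values
            if (isset($params['q'])) {
                $keywords = $params['q'];                // present in both formats
            }
            if (isset($params['url'])) {
                $landing = $params['url'];               // only the /url?source=search form carries this
            }
        }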

    Read the article

  • How can I monitor a website for malicious changes to the files

    - by rossmcm
    I had an occasion recently where our website was compromised: a link farm was added to a couple of the pages on one occasion, and on another occasion a large and nasty .aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaky password; around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway, what I need is a utility or service (preferably free) that takes a snapshot of my website's contents and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
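
    In the meantime, a minimal PHP sketch of a do-it-yourself baseline-and-compare check, assuming the script can run on the host (e.g. from cron); it hashes file contents rather than trusting size and datestamp, since an attacker can easily preserve both. The paths and email address are assumptions:

        <?php
        $root     = '/var/www/mysite';                    // assumed web root
        $baseline = '/var/backups/site-hashes.json';      // assumed location outside the web root

        // Hash every file under the web root.
        $current = array();
        $files = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
        );
        foreach ($files as $file) {
            $current[$file->getPathname()] = md5_file($file->getPathname());
        }

        // Compare against the previous snapshot, if any.
        $previous = is_file($baseline) ? json_decode(file_get_contents($baseline), true) : array();
        $added    = array_keys(array_diff_key($current, $previous));
        $removed  = array_keys(array_diff_key($previous, $current));
        $changed  = array();
        foreach (array_intersect_key($current, $previous) as $path => $hash) {
            if ($previous[$path] !== $hash) {
                $changed[] = $path;
            }
        }

        if ($added || $removed || $changed) {
            // hypothetical address; swap in whatever alerting mechanism you prefer
            mail('admin@example.com', 'Website file changes detected',
                 print_r(compact('added', 'removed', 'changed'), true));
        }
        file_put_contents($baseline, json_encode($current));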

    Read the article

  • Why google is not crawling my website

    - by Aman Virk
    I run a design and development blog, http://www.thetutlage.com/. Over the last couple of days my search traffic has dropped from 70% to 10%. I am against black hat SEO myself; all I do is write my own unique content almost every day. Last week my search traffic was really good, but now it is dropping sharply. I have checked my Webmaster Tools dashboard and there is no message there from Google. When I checked the server logs I found that the last time Google crawled my website was on 27 September 2012. I really have no idea what I am doing wrong. I follow all the Google guidelines like a bible. Please help me.

    Read the article

  • administering new website - Found really tiny keywords inside page

    - by ndefontenay
    I'm administering this new website. The previous web admin included a large number of tiny keywords at the top of some of its pages. I've removed them already. I need to know whether I have to reset the domain with Google Webmaster Tools, or will Google notice the change and take action? Thanks in advance.

    Edit: They are not meta keywords. They are literally text so small that it looks like a fine line of gibberish on the page itself. This clearly violates Google's guidelines. My point was more: do I need to tell Google that we are not bad pupils anymore?

    Read the article

  • 403 error on index file

    - by John L.
    When I try to access index.py in my server root through http://domain/, I get a 403 Forbidden error, but I can access it through http://domain/index.py. In my server logs it says "Options ExecCGI is off in this directory: /var/www/index.py". However, my httpd.conf entry for that directory is the same as the ones for other directories, and getting to index.py directly works fine. My permissions are set to 755 for index.py. I also tried making a PHP file and naming it index.php, and it works from both domain/ and domain/index.php. Here is my httpd.conf entry:

        <Directory /var/www>
            Options Indexes Includes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
            AddHandler cgi-script .cgi
            AddHandler cgi-script .pl
            AddHandler cgi-script .py
            Options +ExecCGI
            DirectoryIndex index.html index.php index.py
        </Directory>

    Thanks

    Read the article

  • Duplicate content in Top Level Domain and country specific website

    - by Ando
    I have myproduct.com, which is my master product page. For the UK I also own myproduct.co.uk, which is a copy of myproduct.com with some localized content: landing page, promotions, prices, and specific tags. But there is also duplicate content: myproduct.com/FAQs/ is the same as myproduct.co.uk/FAQs/. I don't want to redirect from myproduct.co.uk/FAQs/ to myproduct.com/FAQs/, as I don't want people to leave the localized website. myproduct.com/FAQs/ is my "go-to" FAQ page and the one most likely to be up to date, so I want that page to be indexed by search engines, whereas I don't care whether myproduct.co.uk/FAQs/ is indexed (unless indexing this page would increase my page rank :) ). What should I do now to be SEO friendly and SEO optimal?

    - Stop indexing of myproduct.co.uk/FAQs/ via robots.txt?
    - Do some rel="alternate" hreflang="x" configuring on both /FAQs/ pages?
    - Something else?
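
    A minimal sketch of the hreflang option from the list above, assuming en-US and en-GB targeting: each page lists itself and its alternate, and both stay indexable, so search engines treat them as localized alternates rather than plain duplicates. (If only the .com copy should be indexed, a canonical tag on myproduct.co.uk/FAQs/ pointing at myproduct.com/FAQs/ is the usual alternative.)

        <!-- in the <head> of http://myproduct.com/FAQs/ -->
        <link rel="alternate" hreflang="en-us" href="http://myproduct.com/FAQs/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/FAQs/" />

        <!-- in the <head> of http://myproduct.co.uk/FAQs/ -->
        <link rel="alternate" hreflang="en-us" href="http://myproduct.com/FAQs/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/FAQs/" />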

    Read the article

  • How do you set up the directory structure for a multilingual site without duplicating content?

    - by Ricardo
    I want to make a website in two languages. I've looked around and settled on the directory option for separating the two languages. How do I make it work? Let's say I have the following three files for the landing homepage, the English page, and the Spanish page:

        http://www.domain.com/index.html
        http://www.domain.com/en/index.html
        http://www.domain.com/es/index.html

    Let's also say that /index.html will be in English, with a link to /es/index.html. In turn, /es/index.html will have a link to the English version. Would this link back to /index.html or to /en/index.html? And how do I get both English versions (the one at the root and the one in the directory) to actually be the same file in the same directory? I'm new to this, so I'm not using any scripts yet. To me, the obvious solution is to duplicate both English versions and have the one at the root point to files under the /en/ directory, but I'm not a fan of duplication and I've learned that search engines really frown upon it. Can anyone point me in the right direction?
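
    A minimal sketch of one way to avoid keeping two English copies, assuming Apache: keep the real files only under /en/ and /es/ and let the server answer the bare root from the /en/ version.

        # Option A: one canonical URL; requests for the bare root are redirected to /en/
        RedirectMatch 301 ^/$ /en/index.html

        # Option B: keep the root URL but serve the /en/ file internally (mod_rewrite);
        # the same content is then reachable at two URLs, so a canonical tag would help
        # RewriteEngine On
        # RewriteRule ^/?$ /en/index.html [L]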

    Read the article

  • Internet Explorer and margins

    - by Hailwood
    Hi there. I have some pretty simple HTML which is meant to produce the layout below. To push the tabs down from the user bar I am using margin-top: 35px;. However, in Internet Explorer the tabs are completely misaligned (the top of the tabs is where the bottom should be), so I need to use margin-top: -50px; for Internet Explorer. Why is this, and how can I fix it without using an IE-specific stylesheet?

        <div id="pageHead">
            <div id="userBar">
                <span class="bold">Hi Matthew Hailwood | <a href="#">Logout</a>
            </div>
            <a href="http://localhost/buzz/" id="pageLogo"></a>
            <div id="pageTabs" class="clearfix">
                <ul>
                    <li><a href="http://localhost/buzzil/templates">Templates</a></li>
                    <li><a href="http://localhost/buzzil/messaging">Messaging</a></li>
                    <li><a href="http://localhost/buzzil/contacts">Contacts</a></li>
                </ul>
            </div>
        </div>

    With the CSS being:

        #pageHead { height: 100px; }
        #pageLogo {
            float: left;
            width: 149px;
            height: 77px;
            margin-top: 11px;
            background: transparent url('../images/logo.png') no-repeat;
        }
        #userBar {
            text-align: right;
            color: #fff;
            margin-top: 10px;
        }
        #userBar a:link, #userBar a:visited, #userBar a:active {
            font-weight: normal;
            color: #E0B343;
            text-decoration: none;
        }
        .clearfix:after {
            content: ".";
            display: block;
            clear: both;
            visibility: hidden;
            line-height: 0;
            height: 0;
        }
        .clearfix { display: inline-block; }
        html[xmlns] .clearfix { display: block; }
        * html .clearfix { height: 1%; }
        #pageTabs {
            float: right;
            margin-top: 35px;
        }
        #pageTabs ul {
            position: relative;
            width: 100%;
            list-style: none;
            margin: 0;
            padding: 0;
            border-left: 1px solid #000;
        }
        #pageTabs ul li {
            float: right;
            background: url(../images/tabsBg.png) no-repeat 0% 0%;
            border-left: 1px solid #000;
            margin-left: -1px;
        }
        #pageTabs ul li a:link, #pageTabs ul li a:visited, #pageTabs ul li a:active {
            color: #fff;
            background: url(../images/tabsBg.png) no-repeat 100% 0%;
            display: block;
            font-size: 14px;
            font-weight: bold;
            line-height: 42px;
            text-transform: uppercase;
            padding: 4px 32px;
            text-decoration: none;
        }
        #pageTabs ul li a:hover, #pageTabs ul li a:focus {
            text-decoration: underline;
        }

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP               Pages   Hits   Bandwidth
        85.xx.xx.xxx     236     236    735.00 KB
        195.xx.xxx.xx    164     164    533.74 KB
        95.xxx.xxx.xxx   90      90     293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way you could visit my site and use less than 1 MB of bandwidth. You might say there's the possibility that they are browsing the site using some browser or plug-in that does not download images, JS/CSS files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/JS/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs, but I'm curious: is it possible that this person is a legitimate user, or at the very least, not intending to be harmful?

    Read the article

  • How to correctly track the analytics when using iframe

    - by Sherry Ann Hernandez
    In our main .aspx page we have this analytics code:

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-1301114-2']);
            _gaq.push(['_setDomainName', 'florahospitality.com']);
            _gaq.push(['_setAllowLinker', true]);
            _gaq.push(['_trackPageview']);
            _gaq.push(function() {
                var pageTracker = _gat._getTrackerByName();
                var iframe = document.getElementById('reservationFrame');
                iframe.src = pageTracker._getLinkerUrl('https://reservations.synxis.com/xbe/rez.aspx?Hotel=15159&template=flex&shell=flex&Chain=5375&locale=en&arrive=11/12/2012&depart=11/13/2012&adult=2&child=0&rooms=1&start=availresults&iata=&promo=&group=');
            });
            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>

    Inside this .aspx page is an iframe, and inside the iframe we set up this analytics code:

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-1301114-2']);
            _gaq.push(['_setDomainName', 'reservations.synxis.com']);
            _gaq.push(['_setAllowLinker', true]);
            _gaq.push(['_trackPageview', 'AvailabilityResults']);
            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>

    The problem is that I see two pageviews when I go looking for the AvailabilityResults page. The first one is direct traffic and the other one is CPC. How come they have different sources? I was expecting both of them to show direct traffic.

    Read the article

  • How to handle non-existent subdirectories?

    - by Question Overflow
    I have a dynamic website with friendly URLs. For example: instead of /user.php?id=123 I have /user/123, and instead of /index.php?category=fishes I have /fishes. But how do I handle non-existent subdirectories such as /about/123? Currently they return a 200 success instead of a 404 Not Found error. Is there a way to deal with non-existent subdirectories in the Apache config and at the same time allow friendly URLs, or do I have to handle this individually in each PHP script?
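
    A minimal PHP sketch of handling this in one place, assuming all friendly URLs are rewritten to a single front controller (the route names below are hypothetical):

        <?php
        // index.php acting as a front controller: every rewritten request lands here
        // with the original path in $_SERVER['REQUEST_URI'].
        $path     = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        $segments = array_values(array_filter(explode('/', $path)));

        if (empty($segments)) {
            // home page
        } elseif ($segments[0] === 'user' && count($segments) === 2 && ctype_digit($segments[1])) {
            // show the user profile for (int) $segments[1], e.g. /user/123
        } elseif (count($segments) === 1 && in_array($segments[0], array('fishes', 'about'), true)) {
            // show a category page, e.g. /fishes or /about
        } else {
            header('HTTP/1.0 404 Not Found');   // unknown route such as /about/123
            // include '404.php';
            exit;
        }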

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are:

        WordPress/3.3.1; http://www.facecolony.org
        Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>)

    I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
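
    A minimal sketch of a table-driven alternative to the growing CASE statement, classifying at logging time in application code rather than at query time in SQL (the pattern list is illustrative, not exhaustive):

        <?php
        // Ordered pattern => label table; the first match wins, so bot checks and
        // more specific patterns go before the generic browser ones.
        $rules = array(
            '/bot|crawl|spider|slurp/i' => 'Bot/crawler',
            '/Firefox\/(\d+)/'          => 'Firefox $1',
            '/Chrome\/(\d+)/'           => 'Chrome $1',
            '/MSIE (\d+)/'              => 'Internet Explorer $1',
        );

        function classify_user_agent($ua, array $rules) {
            foreach ($rules as $pattern => $label) {
                if (preg_match($pattern, $ua, $m)) {
                    // substitute the captured major version, if any, into the label
                    return isset($m[1]) ? str_replace('$1', $m[1], $label) : $label;
                }
            }
            return 'Unknown';
        }

        // Store the label alongside the raw user agent when writing the log row,
        // so reports no longer need a CASE statement at query time.
        echo classify_user_agent($_SERVER['HTTP_USER_AGENT'], $rules);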

    Read the article

  • Does Bing support anything like Google's First Click Free program?

    - by Dan Fabulich
    Google has a program for webmasters called First Click Free. To implement First Click Free, you need to allow all users who find a document on your site via Google search to see the full text of that document, even if they have not registered or subscribed to see that content. The user's first click to your content area is free. However, once that user clicks a link on the original page, you can require them to sign in or register to read further. The user must be able to see the full content of a multi-page article. You can allow this by displaying all content on a single page to both Googlebot and users. Alternatively, you can use cookies to make sure that a user can visit each page of a multi-page article before being asked for registration or payment. Does Bing support anything like this?

    Read the article

  • Lost Traffic from Google Because of Meta-tag Adding

    - by Marian
    I have a site, aroundnails.com, with an English version on the subdomain en.aroundnails.com. After reading about language-related meta tags for Google, I placed such a tag on the main page of the main site:

        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />

    In this way I tried to tell Google that the site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I removed this meta tag, but traffic has remained at the same level. I hope somebody can help me solve this issue.

    Read the article

  • Uploading a file automatically for speed test?

    - by Abhi
    I am building a web UI for an internet connection device, and one of the requirements is a speed test. I know the basic concept of how a speed test works: a file is downloaded for a limited time, then the same file is uploaded again, and the speed is tracked at regular intervals. Downloading the file is not an issue, but how am I supposed to upload the file without the client knowing that the file is getting uploaded? I've read through a lot of documentation, but I'm still not able to work out how to upload the file from the client's machine without asking the user to select a file.
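
    A minimal browser-side sketch, assuming a reasonably modern browser and a server endpoint (the /speedtest/upload path is hypothetical) that simply discards the request body; the payload is generated in memory, so the user never selects or even sees a file:

        // Generate ~2 MB of dummy data in memory and POST it, timing the transfer.
        var size = 2 * 1024 * 1024;                    // payload size in bytes (assumed)
        var payload = new Blob([new ArrayBuffer(size)]);

        var xhr = new XMLHttpRequest();
        var started = Date.now();

        xhr.upload.onprogress = function (e) {         // sample the speed at regular intervals
            var seconds = (Date.now() - started) / 1000;
            if (seconds > 0) {
                console.log((e.loaded / 1024 / seconds).toFixed(1) + ' KB/s');
            }
        };
        xhr.onload = function () {
            var seconds = (Date.now() - started) / 1000;
            console.log('Upload finished: ' + (size / 1024 / seconds).toFixed(1) + ' KB/s average');
        };

        xhr.open('POST', '/speedtest/upload');         // hypothetical endpoint
        xhr.send(payload);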

    Read the article

  • Implications on automatically "open" third party domain aliasing to one of my subdomains

    - by Giovanni
    I have a domain, let's call it www.mydomain.com, where I have a portal with an active community of users. In this portal users cooperate, wiki-style, to build a certain kind of software. These software applications can then be run by accessing "public.mydomain.com/softwarename". I now want to let my users run these applications from their own subdomains. I know I can do that by automatically modifying the .htaccess file; this is not a problem. I want to let these users create DNS aliases that point to one specific subdomain. So if a user "pippo" who owns "www.pippo.com" wants to run the software HelloWorld from his own subdomain, he has to:

    1. Register on my site
    2. Create his own subdomain on his own site, run.pippo.com
    3. From his DNS control panel, create a CNAME record for "run.pippo.com" pointing to "public.mydomain.com"
    4. Type http://run.pippo.com/HelloWorld in a browser

    When the software (which physically runs on my server) is called, it first checks that the originating domain is a trusted one. I don't do any other kind of check restricting software execution. From an SEO perspective, I care about Google indexing www.mydomain.com, but I don't care about indexing of public.mydomain.com. What are the possible security implications of doing this for my site? Is there a better way to do this, or software that already does this that I can use?
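
    A minimal PHP sketch of the "trusted originating domain" check mentioned above, assuming the aliased requests arrive at public.mydomain.com with the alias in the Host header (the whitelist below is hypothetical). Note that the Host header identifies the alias but is client-supplied, so this is identification rather than strong authentication:

        <?php
        // Front controller on public.mydomain.com: serve the requested application
        // only when the request came in through an approved hostname.
        $trusted = array('public.mydomain.com', 'run.pippo.com');  // e.g. loaded per user from the database

        $host = strtolower(strtok($_SERVER['HTTP_HOST'], ':'));    // drop an optional :port suffix

        if (!in_array($host, $trusted, true)) {
            header('HTTP/1.0 403 Forbidden');
            exit('This domain is not allowed to run applications.');
        }
        // ... dispatch to the requested /softwarename as usual ...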

    Read the article

  • Making own clothes website [on hold]

    - by Manjushree
    I am a BSc student in Mathematics, but I would like to create my own clothes website. Can anyone help me with how to design it? I don't have any background knowledge about putting a web page online. The website does not have to look professional, just simple enough that I can put up my clothes to show the items to people or customers. Once I have created the website, I can open a business account and start selling the goods online with that account. Do I need to buy a domain to create the website? Please help me.

    Read the article

  • Multiple sites redirected to one main site

    - by mattgcon
    I have a client who insists on having multiple website domains all redirecting to one main website domain. It is getting out of hand, and his server has become convoluted and riddled with garbage because of it, not to mention confusing at times. Each of the domains he is setting up has no content; they simply redirect the user to the main website domain. Is this practice of having multiple domains pointing to one main website common? And does anyone know where I can find information to give to this client to let him know it is a bad practice, if it is one?

    Read the article

  • Using env variables with RewriteRule and ErrorDocument

    - by misterte
    Hi, I'm having problems with the following while configuring my Apache server to rewrite some URLs:

        SetEnv PATH_TO_DIR /directory
        RewriteRule ^%{PATH_TO_DIR}/([a-zA-Z0-9_\-]+)/([a-zA-Z0-9_\-\.]+)/?$ /index.php?dir=$1&file=$2
        ErrorDocument 404 %{PATH_TO_DIR}/index.php?dir=null&file=error

    This configuration used to work perfectly fine until I introduced SetEnv PATH_TO_DIR. I need to use it because there are lots of rules, not just these. Can anyone point out my mistake? Apache returns %{PATH_TO_DIR}/index.php?dir=null&file=error when I try anything (www.site.com/foo/bar for instance), and it returns the ErrorDocument if I just try to fetch the index. I know it's not a problem with the rewrite rules, because they work when I remove the PATH_TO_DIR variable and just hard-code the path. Thanks! A.
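
    A sketch of an alternative that keeps the path in one place, assuming Apache 2.4+: Define creates a configuration-time variable expanded with ${...} in any later directive, whereas SetEnv only sets a per-request environment variable, which neither the RewriteRule pattern nor ErrorDocument will interpolate via %{...}.

        # Apache 2.4+: a config-time variable, substituted while the configuration is parsed
        Define PATH_TO_DIR /directory

        RewriteEngine On
        RewriteRule ^${PATH_TO_DIR}/([a-zA-Z0-9_\-]+)/([a-zA-Z0-9_\-\.]+)/?$ /index.php?dir=$1&file=$2
        ErrorDocument 404 ${PATH_TO_DIR}/index.php?dir=null&file=error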

    Read the article

  • Please explain some of the features of URL Rewrite module for a newbie

    - by kunjaan
    I am learning to use the IIS URL Rewrite module, and some of the "features" listed on its page are confusing me. It would be great if somebody could explain them to me and give a first-hand account of when you would use each one. Thanks a lot!

    - Rewriting within the content of specific HTML tags
    - Access to server variables and HTTP headers
    - Rewriting of server variables and HTTP request headers (what are the "server variables", and when would you define or redefine them?)
    - Rewriting of HTTP response headers
    - HtmlEncode function (why would you use HtmlEncode on the server?)
    - Reverse proxy rule template
    - Support for IIS kernel-mode and user-mode output caching
    - Failed Request Tracing support

    Read the article
