Search Results

Search found 14789 results on 592 pages for 'pro backup'.


  • IE8 HTTPS Download Issue

    - by Jon Egerton
    I have a problem with a system I develop, related to IE8 downloading over SSL (i.e. on sites using https://...), which is described in this MS KB article: http://support.microsoft.com/kb/323308 We use the HTTPCacheability.NoCache option as the data being downloaded is sensitive and is downloaded from a secured site. I don't want that data to be cached on any of the proxies the response passes through on its way back to the client. The article describing the issue details a fix on the client side: a registry change to a BypassSSLNoCacheCheck setting. I don't want to loosen the system security just for IE8, as the system works fine on anything more up to date, and getting all the clients to apply the hotfix is difficult at best and impossible at worst. We need to support IE8 in the system, at least for now. So: 1) Does the detailed hotfix have any implications for security at the browser end in IE8 - does it mean the file will be cached (in a place other than where the user saves the file)? 2) Is there some way I can make these files downloadable with a change at the server end that doesn't break the security side of things?

    Read the article

  • How to use rel=canonical with Sitecore aliases?

    - by Mike G
    I have inherited a Sitecore architecture that is a mess from an SEO duplicate-content point of view. There are multiple aliases that have been created (and indexed by the search engines) for many of the 2nd-tier pages of the site. Due to server issues, I am not able to 301 redirect these duplicated pages, so I would like to use the rel=canonical tag in an attempt to get Google/Bing to recognize the correct pages I would like to appear in the index. I have blocked the most extraneous duplicated pages with a robots.txt file; however, since Google/Bing have already spidered many of the duplicated pages, I need to keep them accessible to the spiders BUT removed from the index. The catch is, since the duplicated pages are aliases (and don't seem to physically exist in Sitecore, as far as I can find), I am not sure how to go about using rel=canonical - or if I even can in this situation...?

    Read the article

  • Why are subdomains of Blogspot/WordPress-like sites treated as different domains or sites?

    - by Thedijje
    As far as I know, maps.google.com and mail.google.com both come under the same domain; they are subdomains. The entire web treats these subdomains as part of the main domain, and they share the same Alexa rank, PageRank and so on. But on the other hand, take a look at blogspot.com/wordpress.com/webs.com; these are different sites, but blogs or websites under those domains are treated as different sites. Each is a new URL, with its own PageRank and Alexa rank. There are millions of subdomains under those few domains, with almost identical IP addresses, hosting and CMS, so why are they treated as different domains?

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content - not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of my page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?

    Read the article

  • Do Not Track feature of IE10

    - by Pete Herbert Penito
    One of our clients is getting a bit worried about the new "Do Not Track" feature of Internet Explorer 10. Her site is heavily dependent on PHP sessions (as I imagine many other sites are). This is what she was reading: http://www.bbc.co.uk/news/technology-18288710 I need some clarification: will this affect how sessions (or cookies) work on normal web sites that use the PHP $_SESSION array? Or does it only concern how advertising works (Engadget's article seems to insinuate this)? Can anyone provide a more technical overview of the ramifications for PHP-powered websites?
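    For context, "Do Not Track" is nothing more than an extra request header (DNT: 1) that the browser sends; it does not block cookies or PHP sessions unless the site itself decides to honor it. As a minimal sketch (assuming the site runs on Apache with mod_setenvif - purely illustrative, nothing here is required for sessions to keep working), the header can be surfaced to the application like this:

        # Hypothetical snippet: expose the browser's DNT header as an environment
        # variable the application can inspect. Sessions and cookies work regardless.
        <IfModule mod_setenvif.c>
            SetEnvIf DNT "1" DO_NOT_TRACK=1
        </IfModule>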

    Read the article

  • Navigation menus and SEO

    - by Rodolfo
    I've always had my doubts about the effect of navigation menus on SEO - you know, the vertical menus at the top that show on every page of the site, linking to the main sections and subsections. My issue is that, if not built dynamically (i.e. after the page is loaded or something), from a search engine's point of view a menu probably looks like a whole bunch of links at the beginning of the page - links that probably have nothing to do with the page being analyzed - so it's probably not only confusing the engine, but also giving link 'juice' to the wrong pages or reducing its value. When I've asked SEO people about this, I usually get a "Google is smart, they'll recognize it as a menu and ignore it" response, but I'm not convinced (and the 'Google is smart' argument sounds almost like a religious discussion to me). So does it affect SEO negatively or not? Are there any official posts on this topic?

    Read the article

  • SEO on an existing platform

    - by Simon
    I've been given the task of increasing user visits and conversions for a recruitment website. Conversions would be interested job seekers submitting their CV. The manager would first like to increase the organic search results and optimize the website before starting with targeted campaigns. The problem is, they are using a proprietary recruitment software platform to which I can barely make changes. For example, the URLs all look like dynamic URLs without any semantic meaning, and the markup is almost completely built automatically by that platform. I'm also confident that the lack of submitted CVs is due to a bad user experience on the website (no incentives or clear CTA to register). Besides optimizing the static texts and page titles, is there anything I can do? Thanks

    Read the article

  • How to filter traffic coming to a particular page from another page?

    - by BishKopt
    I've got page A linking to page B. There are also other pages linking to B. How can I see only the traffic that is coming to page B from page A? I can somehow do it via the Behavior flow report: Behavior > Behavior Flow > [right-click on anything] Explore traffic through here > [click the edit icon] Define a page group > [right-click] Group details > [dropdown] Incoming traffic. But how do I do it in normal reports? Is there any way to filter out only the traffic coming from a particular page?

    Read the article

  • Changing domain name - what are the practical steps involved

    - by Homunculus Reticulli
    I launched a website a couple of years ago, bright-eyed and bushy-tailed, with dreams of conquering the world. Unfortunately it wasn't to be. Now that I am a bit older and wiser, and have spent some money on branding and creating more quality content, I am rebranding and relaunching the site with a new domain name. Although the traffic on the old site is laughable (i.e. non-existent), there are a few pages of good information on there and I don't want to lose any "juice" those pages may have gained, because web crawlers have been seeing them for a few years now. OK, the upshot of all that is this: I want to change my domain name from xyz.com to abc.com. I am maintaining the same friendly URLs I had before; only the domain name part of each URL will change, so that any traffic coming to an old page is forwarded/redirected to the new page seamlessly. How do I go about achieving this (i.e. what are the steps I need to carry out), and how do I minimize any "disruption" to whatever credibility the existing site has with Googlebot etc.? I am running Apache 2.x on a headless Linux (Ubuntu) server.
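    For the redirect itself, a minimal sketch (assuming Apache 2.x with mod_rewrite enabled, and using xyz.com/abc.com as the placeholder names from the question), placed in a .htaccess at the old domain's document root, would be:

        # Permanently redirect every request on the old domain to the same path
        # on the new domain, so xyz.com/some/page lands on abc.com/some/page.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?xyz\.com$ [NC]
        RewriteRule ^(.*)$ http://abc.com/$1 [R=301,L]

    The 301 (permanent) status is what tells crawlers the move is intentional; combined with the change-of-address notification in Google Webmaster Tools, it is the standard way to carry over whatever value the old pages have.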

    Read the article

  • Is it possible to use a surrogate to buy a .it domain?

    - by Matthew Reinbold
    I'm a US citizen interested in buying an Italian TLD (*.it). However, those domains can only be registered by EU citizens or residents, or by businesses with a registrant who is an Italian citizen and resident. Are there companies that provide a 'surrogate'-like service, where they fulfil the requirements for registration but I administer the domain? What are they, and what can I expect to pay the middleman? Or am I a horrible person for even considering 'circumventing' the intent of the restriction?

    Read the article

  • Google Chrome and Theora

    - by Michas
    I have a problem with Google Chrome and Theora video. I have a video that plays fine in Opera and Firefox; however, it doesn't play in Google's browser and I don't know why. I made this video with ogg2theora. Test sample: http://wwsi.edu.pl/video/enigma.html Does anyone know how I should encode to Theora so that it works in Google Chrome? P.S. I am not interested in encoding to WebM or H.264 this time, and I am not asking what the best way to publish video on a website is - I am only experimenting with Ogg Theora. The test site has a fallback to H.264 in the video tag and to WMV in a WMP plugin.
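    One server-side cause worth ruling out (an assumption, not a confirmed diagnosis of this particular file): if the .ogv file is served with the wrong Content-Type, some browsers refuse to play it even though others are more forgiving. On Apache that is a one-line check:

        # Make sure Ogg video is served as video/ogg rather than text/plain
        # or application/octet-stream.
        AddType video/ogg .ogv .ogg

    If the MIME type is already correct, the problem is more likely in the encode itself.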

    Read the article

  • Creating date-based back entries for a blog and its site registration

    - by open_sourse
    So I am showing a blog to a colleague and telling him how the author has been blogging regularly for over ten years now. My colleague tells me that anyone can register a domain name and start entries dated from, say, circa 2000. When I argued that the site registration date can easily show that the registration was done recently, he put forward two arguments: 1) the author can claim that he moved from an old domain name which was registered many years ago and has since lapsed, taking the data and rebuilding it on the new site; 2) the author can buy an expired domain which was on the internet for many years. I am not sure if these tricks can work to con someone into believing you have been a blogger for over a decade, but I do not have enough expertise in the topic to refute him. So I thought I would ask the wise community here at Stack Exchange. Can anyone give me some insight?

    Read the article

  • Google search does not show sub-pages from my website

    - by user5679
    My website appears in Google search, but only the first page. Of course I have sub-pages linked from the first page, but the sub-pages do not show in Google search - nor in Yahoo or Bing. What should I do? It has been three years and the sub-pages still do not show. (I tried searching site:mydomain.com and pressed the 'repeat the search with the omitted results included' link.) What would you suspect the reason to be? My website addresses were like xxx.php?yy=zzz etc., so I changed them to /yy/zzz using mod_rewrite. I also thought it might be (X)HTML standard violations, so I have now fixed those. I hope Google will soon have my entire website, but I am a little bit pessimistic. Do you have any thoughts?
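    One thing that often helps after a URL change like this (a sketch only, using the xxx.php?yy=zzz placeholders from the question and assuming Apache with mod_rewrite in a .htaccess at the site root): keep the clean URLs serving the same script, and 301 the old dynamic URLs to the new form so any pages Google already knows about are consolidated rather than treated as separate, missing pages.

        RewriteEngine On

        # 301 the old dynamic form (xxx.php?yy=zzz) to the new clean form /yy/zzz.
        # THE_REQUEST reflects the original request line, so internally rewritten
        # requests below do not loop back through this rule.
        RewriteCond %{THE_REQUEST} \?yy=([^&\s]+)
        RewriteRule ^xxx\.php$ /yy/%1? [R=301,L]

        # Serve the clean form internally from the original script.
        RewriteRule ^yy/([^/]+)/?$ xxx.php?yy=$1 [L,QSA]

    Submitting an XML sitemap of the new URLs through Webmaster Tools also gives Google an explicit list of sub-pages to crawl.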

    Read the article

  • Google doesn't index a subdomain. What could the problem be, and what can be done?

    - by fudge
    Hi! I have a domain, let's call it example.com, which has a subdomain, games.example.com. I maintain a games forum using phpbbseo, which is located at games.example.com/forum. The problem is that the forum is not being crawled. I used Google's webmaster tools and verified that the page is seen by Google. P.S. There is a link from games.example.com to games.example.com/forum. What can I do? How can I make Google crawl my forum?

    Read the article

  • Subdomain redirect to WWW

    - by manix
    I have the domain example.com and the subdomain test.example.com running on an Apache server. For some reason, when I try to visit test.example.com it is redirected to www.test.example.com, and consequently a "Server not found" error is displayed in the browser. Both .htaccess files (in the root and in the subdomain folder) are empty. Additional facts: I have another subdomain, xyz.example.com, pointed to the public_html/xyz directory with some content inside (an index.html with a "hello world" message), and it works fine if I use xyz.example.com instead of www.xyz.example.com. So, can you help point me in the right direction? I have a VPS and am able to change any file if required. Below you can find my virtual host configuration:

        <VirtualHost xx.xxx.xxx:80>
            ServerName test.example.com
            ServerAlias www.test.example.com
            DocumentRoot /home/example/public_html/test
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/test.example.com combined
            CustomLog /usr/local/apache/domlogs/test.example.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ScriptAlias /cgi-bin/ /home/example/public_html/test/cgi-bin/
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/std/2/example/test.example.com/*.conf"
        </VirtualHost>
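    The "Server not found" error usually points at DNS rather than Apache: unless a record (or a wildcard) exists for www.test.example.com, the browser never reaches this VirtualHost at all. Assuming the unwanted www. prefix is being added by some other rule or cPanel setting, a sketch of a rule for /home/example/public_html/test/.htaccess that sends the www variant back to the bare subdomain (it can only take effect once the www hostname actually resolves) would be:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.test\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://test.example.com/$1 [R=301,L]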

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have http://example.com/ pointing to the main hosted files and http://example.com/mydir pointing to the subdomain files. This is achieved by an httpd.conf include from the main domain section that sets an alias:

        alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped and /mydir is served from the main host files instead. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported for the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
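    One pattern that tends to behave more predictably than a .htaccess file under an Alias (a sketch, on the assumption that the same httpd.conf include that defines the alias can be edited): put both the alias and the authentication in a <Directory> block, so the protection does not depend on per-directory overrides being picked up.

        Alias /mydir /path/to/subdomain/files/
        <Directory "/path/to/subdomain/files/">
            AuthType Basic
            AuthName "Secure Access"
            AuthUserFile /path/to/valid/readable/.htpasswd
            Require valid-user
        </Directory>

    If the .htaccess must stay, it is worth checking that AllowOverride for that directory includes AuthConfig; without it, the auth directives themselves count as the kind of .htaccess "error" that already triggers the odd fallback described above.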

    Read the article

  • Using paypal to process credit cards in Sweden through an API [on hold]

    - by Mastikator
    I'm looking for a PayPal API that lets me process credit cards to make payments without being redirected to a PayPal site and without forcing consumers to use a PayPal account - and it needs to work in Sweden. The ones I've looked at (DoDirectPayment, Express Checkout, the PayPal Pro gateway) have not let me process credit cards in Sweden via an API that doesn't force the user to visit the PayPal login site. I have a form on my webpage where the user types their credit card number, CVV2, expiration, name, address, etc. I need an API that works in Sweden and simply processes the request, without the step of being redirected to a PayPal website. The ones I have found only work in a select few countries - is there an international solution? I've already spent over 12 working hours just looking for an API that meets my requirements.

    Read the article

  • Can I use a list of blog ping services for a portal?

    - by Ivanhoe123
    I'm setting up a list of ping services for a portal. It has a blog, forum, articles, restaurants, hotels and much other information, so it goes far beyond a blog. I have a list of standard ping services for WP blogs, but I do not know whether these are meant literally only for blogs. My questions are: Is it recommended to ping blog services, such as http://blogsearch.google.com/ping/RPC2, from a portal? Are there any penalties for sites that are not recognized as blogs? Is there some list of ping services for regular websites and not only blogs? Thanks!

    Read the article

  • Keeping a Rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than subsequent requests. In some cases this is fine; in other cases it is not acceptable. I am currently querying the machine with wget from a different host to keep it "alive", but I wonder what frequency would be necessary. Does anyone know the time period after which they send a VM to "sleep"? I guess it would be some minutes. EDIT: There is absolutely no caching involved on the PHP site. It recently moved from another vhost, and there was never such latency on the first request there.

    Read the article

  • Usefulness of the Backlinks shown in Webmaster Tools

    - by Ewan Heming
    Is the list of links for a site shown in Google Webmaster Tools a complete list or just a sample? I've noticed that the links in there appear to be all the ones I didn't think would have any real value - either because they were nofollow or from irrelevant sites. The few I did think would be of some use have never shown up, and there are also some links that are sometimes there and sometimes not (such as my LinkedIn profile). Does this mean that the missing links don't, or no longer, carry any value? It almost appears that the list is there for Google either to inform you about problems (there was a useful list there when someone tried to spam my site) or to misinform you about which link-building strategies work (to keep people guessing about what works or not).

    Read the article

  • Multiple Businesses at The Same Physical Address - SEO / Google Places

    - by Howdy_McGee
    I was wondering whether there would be any negative effects of having multiple businesses list the exact same physical address on their websites. Currently we have five businesses at the exact same address and it shows on their websites, so when people google one of the five businesses' addresses they are going to get multiple results from multiple websites, most of which will not be what they are looking for. What is a way around this / what can I do about this? Would adding "Suite Numbers" be a solution? A thought occurred to me that it might be a good idea to create a landing page for users who are looking up a business by its address via Google. The page would bring up multiple businesses, since we have a few at the same address, but if we have a landing page at the top which then leads to the individual businesses, that might solve the multiple-address SEO problem. I'm going to keep researching it, though. I also know this could become a problem for Google Places (and possibly Bing Local and Yahoo Local). I've submitted an inquiry with them, but I wanted to know if anybody has a ready-made solution so that Google doesn't bunch all these companies together into one. Thanks!

    Read the article

  • What web hosts support multi-domain SSL?

    - by Bryan Hadaway
    For consideration - please do not close or refer this question to: How to find web hosting that meets my requirements? The above link does not refer to SSL certificates in any manner. This question has the very specific objective of listing known web hosts that support this newer SSL technology. If I'm not mistaken, multi-domain (not wildcard) SSL is a relatively new technology that is not widely supported or well known/advertised yet, and I'm having a difficult time discovering which web hosts support it (again, because it's not popular enough yet to appear on feature lists). Here is what I've discovered so far:

        Web hosts that DO NOT support multi-domain SSL: BlueHost/HostMonster, DreamHost
        Web hosts that DO support multi-domain SSL: FireHost, HostGator

    Please note that SUPPORT doesn't necessarily mean they offer the SSL certs themselves; you may need to purchase the certificate separately.
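    For clarity on what "multi-domain SSL" looks like on the server side (a sketch only, assuming a host where you control Apache and have bought a single SAN/UCC certificate covering several hostnames - the paths and names below are hypothetical):

        <VirtualHost *:443>
            ServerName example.com
            ServerAlias www.example.com example.net www.example.net
            DocumentRoot /var/www/example
            SSLEngine on
            # A single SAN (multi-domain) certificate listing all of the names above.
            SSLCertificateFile      /etc/ssl/certs/example-san.crt
            SSLCertificateKeyFile   /etc/ssl/private/example-san.key
            SSLCertificateChainFile /etc/ssl/certs/ca-intermediate.crt
        </VirtualHost>

    On shared hosting you cannot edit this yourself, which is exactly why host support (and whether they will install a certificate bought elsewhere) matters.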

    Read the article

  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to just set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I have 4 different behaviors going on in 4 browsers, but I need them all to render in at least a similar way. Safari & Chrome (WebKit): all rows are equal height, creating scroll bars as needed and fitting within the table height. Firefox: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table; additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, through use of Firebug, that the actual table [and TR] extend to the bottom of the proper table height). IE: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Obviously this only covers one version of each browser, and more variation would likely appear with further testing. Ideally, a solution where the browser renders TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall height of the table is not enough, would be optimal. I could potentially see a solution achieving that with JS, but can it be done with CSS? Or, if not, can the behavior that WebKit displays be made to work across the browsers? Thanks! PS: An example can be found here.

    Read the article

  • Mod Rewrite not working on my addon domain

    - by Ogugua Belonwu
    I have a WordPress website on my main domain. For the WordPress site I have this in my .htaccess file:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php
        </IfModule>
        # END WordPress

    I just created an addon domain and wanted to use new rules for it, so I created a .htaccess file and put it inside the addon folder, e.g. /newaddon. In that .htaccess file I have:

        Options -Indexes
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^readjob/(.*)/(.*)/(.*)/$ readjob.php?id=$1&cat=$2&title=$3
        </IfModule>

    The URL structure I have is this: http://www.website.com/readjob/3/jobs/web-designers-potech-integrated-services/ But it keeps telling me the link is broken. I don't know what to do; please help (I only learnt mod rewriting today, so clarity will be highly appreciated). Thanks
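    A sketch of the same rule with slightly safer matching (assuming the trailing slash is always present, as in the example URL): [^/]+ keeps each capture inside its own path segment instead of letting .* match greedily across slashes, and the L flag stops further rule processing once the rule fires.

        Options -Indexes
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^readjob/([^/]+)/([^/]+)/([^/]+)/$ readjob.php?id=$1&cat=$2&title=$3 [L,QSA]
        </IfModule>

    If it still reports a broken link, it is worth confirming that readjob.php actually sits in the addon domain's folder and that the URL being requested uses the addon domain's hostname rather than a path under the main domain.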

    Read the article
