Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Implementation of instance testing in Java, C++, C#

    - by Jake
    For curiosity purposes, as well as to understand what they entail in a program, I'm curious how instance testing (instanceof in Java, is in C#, dynamic_cast in C++) works. I've tried to google it (particularly for Java), but the only pages that come up are tutorials on how to use the operator. How do the implementations vary across those languages? How do they treat classes with identical signatures? Also, it's been drilled into my head that using instance testing is a mark of bad design. Why exactly is this? When is it applicable - instanceof should still be used in methods like .equals() and such, right? I was also thinking of this in the context of exception handling, again particularly in Java. When you have multiple catch statements, how does that work? Is that instance testing, or is it resolved during compilation which catch block each thrown exception would go to?
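
    A minimal Java sketch of the two behaviours the question touches on (class and variable names are made up for illustration): instanceof checks the object's runtime class, and catch clauses are likewise selected at runtime, top to bottom, by the thrown object's type.

        public class InstanceTestingDemo {
            static class Animal {}
            static class Dog extends Animal {}

            public static void main(String[] args) {
                Animal a = new Dog();

                // instanceof is evaluated against the object's actual runtime class,
                // not the static type of the reference.
                System.out.println(a instanceof Dog);    // true
                System.out.println(a instanceof Animal); // true

                // Catch clauses are matched at runtime, in order, against the
                // dynamic type of the thrown object - the first match wins.
                try {
                    throw new IllegalArgumentException("bad argument");
                } catch (NullPointerException e) {
                    System.out.println("never reached");
                } catch (RuntimeException e) {
                    System.out.println("caught: " + e.getMessage());
                }
            }
        }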

    Read the article

  • How to secure robots.txt file?

    - by CompilingCyborg
    I would like user agents to index only my relative pages, without accessing any directory on my server. As an initial thought, I had this version in mind: User-agent: * Disallow: */* Sitemap: http://www.mydomain.com/sitemap.xml My questions: Is it correct to block all directories like that - Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial for robots.txt: #Add this if you want to stop Alexa from indexing your site. User-agent: ia_archiver Disallow: / #Add this to stop duggmirror User-agent: duggmirror Disallow: / #Add this to allow specific agents User-agent: Googlebot Disallow: #Add this to allow all agents while blocking specific directories User-agent: * Disallow: /cgi-bin/ Disallow: /*?*
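
    A hedged sketch of what that could look like: most robots.txt parsers expect Disallow values to start with a slash, so */* is unlikely to do what is intended, while a /*/ wildcard (honoured by the major crawlers, but not guaranteed for every bot) blocks anything inside a directory and leaves root-level pages and the sitemap reachable. Note also that robots.txt is a public, purely advisory file, so it cannot really be 'secured' - it only asks well-behaved crawlers to stay out.

        # Block anything inside a directory, allow root-level pages
        # (wildcard support varies by crawler; the major search engines honour it)
        User-agent: *
        Disallow: /*/

        Sitemap: http://www.mydomain.com/sitemap.xml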

    Read the article

  • How to check that I have recovered from Penguin 2.0?

    - by Simon Walker
    I have a 3-year-old website that was hit by Penguin 2.0 in May. The website's traffic dropped almost 30%. I have been working hard on the website for the last 2.5 months, and its traffic recovered in the last week of August. In fact, I am receiving more traffic than ever. When I look at the stats, I find my website's search engine visibility has improved: it is now appearing for more search queries, and my website's impressions have also increased. What I am worried about is that my website is nowhere in the top 5 pages for keywords with high competition and the highest search volume. They are few in number but important. Should I consider my current situation a recovery, or is it just a partial recovery? If it is only partial, then how come traffic is higher than it was before Penguin 2.0?

    Read the article

  • How does hreflang interact with geo targeting?

    - by zakgottlieb
    If I have multiple subfolders that I wish to target at different countries, I'm thinking the ideal setup would be to specify rel="alternate" hreflang with a language AND country code (e.g. en-AU) and ALSO to geotarget that subfolder to the particular country. That way, the pages would show up both in the country-specific results (accessed via Search Tools) because of hreflang, AND in the more generic country results from regular searches, because of geotargeting. Is this correct? P.S. What would happen if you geotargeted a subfolder that had, e.g., a pt-BR hreflang value (i.e. Portuguese as used in Brazil) to just Portugal?
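
    To make the setup concrete, the annotations would look roughly like this in the head of each variant, with every variant listing all of its alternates (domain and paths below are made up for illustration):

        <!-- on both example.com/au/page and example.com/us/page -->
        <link rel="alternate" hreflang="en-au" href="http://example.com/au/page" />
        <link rel="alternate" hreflang="en-us" href="http://example.com/us/page" />
        <!-- optional catch-all for visitors who match neither region -->
        <link rel="alternate" hreflang="x-default" href="http://example.com/page" />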

    Read the article

  • Foolproof way to ensure Google News pulls the correct image for its thumbnails?

    - by Anthony
    Google News results have an accompanying thumbnail next to articles that show up in the results. If Google's crawler can't find a thumbnail to pull from our site, it uses its next best guess from another site, thereby linking the image to another site while still using our headline. Example: headline from Reuters, image from Livemint. Our pages absolutely have images, and they are not massive in file size or dimensions, yet they are not being pulled / crawled correctly. We have read up on the suggestions from Google, and from others around the web, and nothing is panning out. Has anyone had any experience where they can ensure Google News will pull a thumbnail of our choosing?

    Read the article

  • Extracting main article from webpage/feed. Is it legal/ethical?

    - by Mahdi Ghiasi
    There are some applications, like Readability and Pocket, which let users read the main content of web pages in a clean interface. But the articles have to be bookmarked from another application or from the web browser. However, I'm creating a news reader app (Zite and Flipboard are popular news reader apps), and I want to create a clean experience for users, so I want to show the full content of articles inside my application. Some websites have full-text feeds, and I'm using them. But for other websites, which don't have full-text feeds: is it legal/ethical to use, for example, the Readability API (or maybe my own code for this) to show the full text of articles inside my application?

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images. All of those images are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I'd understand if not all of my images got indexed (though that is also unclear and very frustrating for me), but I cannot understand how indexing can go in the negative direction. All the images stay in place, and the pages containing them stay unchanged - at least they are intended to. Any thoughts?

    Read the article

  • Webmaster Tools word count

    - by Henrik Erlandsson
    Is there a way to somehow verify that the googlebot finds the headings and the content, for example by word count? I'm asking this because I tried a program called Screaming Frog, which fails to even fetch the first h1 on a validated page - for about 1/3 of all the pages(!) - and that made me unsure. Even though the site looks hunky-dory in Webmaster Tools, I'd like to know what a googlebot-like content crawler finds on my page and in what order. Any tips on such tools are appreciated. This is not about keyword count.
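
    A rough sketch of the kind of sanity check in question: fetch the page with a Googlebot-like User-Agent and count h1 elements and words in the raw HTML. This only approximates what a real crawler parses (no robots.txt handling, no JavaScript rendering), and the URL and User-Agent strings are placeholders.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class CrawlerViewCheck {
            public static void main(String[] args) throws Exception {
                String url = "http://www.example.com/some-page"; // placeholder URL
                HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                        // Googlebot-like User-Agent; the exact string is illustrative only.
                        .header("User-Agent", "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
                        .build();
                String html = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString()).body();

                // Count h1 elements in the raw markup.
                Matcher h1 = Pattern.compile("<h1[\\s>]", Pattern.CASE_INSENSITIVE).matcher(html);
                int h1Count = 0;
                while (h1.find()) h1Count++;

                // Very rough word count: drop scripts, styles and tags, then split on whitespace.
                String text = html.replaceAll("(?is)<script.*?</script>", " ")
                                  .replaceAll("(?is)<style.*?</style>", " ")
                                  .replaceAll("<[^>]+>", " ");
                int words = text.trim().isEmpty() ? 0 : text.trim().split("\\s+").length;

                System.out.println("h1 elements: " + h1Count + ", approximate words: " + words);
            }
        }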

    Read the article

  • MediaWiki: how to make DISPLAYTITLE be used in categories listings

    - by Konstantin Boyandin
    The problem: a MediaWiki-driven site uses subpages to build a page hierarchy. When I add something like Page1/Page2/Subpage, exactly that string appears in listings and looks clumsy. I can't simply use the short subpage title (Subpage in this example), since it can appear in different contexts and could confuse users. I can use the DISPLAYTITLE magic word, with proper values of $wgRestrictDisplayTitle and $wgAllowDisplayTitle, to reassign the page title and make it show on the page. However, when I look at the categories listing this page, I will still see "Page1/Page2/Subpage" instead of the title assigned. Is there a simple way (through a hack or a relevant extension) to make the new title appear in every listing as well?

    Read the article

  • Constructor should not call methods

    - by Stefano Borini
    I described to a colleague why a constructor calling a method is an antipattern. Example (in my rusty C++): class C { public: C(int foo); void setFoo(int foo); private: int foo; }; C::C(int foo) { setFoo(foo); } void C::setFoo(int foo) { this->foo = foo; } I would like to motivate this point better through your additional contributions. If you have examples, book references, blog pages, or names of principles, they would be very welcome. Edit: I'm talking in general, but we are coding in Python.
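
    One concrete way the pattern bites, sketched in C++ since that is the language of the example above (in Java the hazard is mirrored: the override runs before the subclass's fields are initialized). The class names are made up; the point is that virtual dispatch is not in effect while the base constructor runs, so an overridden setter is silently bypassed.

        #include <iostream>

        class Base {
        public:
            Base(int foo) { setFoo(foo); }                 // resolves to Base::setFoo, never the override
            virtual void setFoo(int foo) { foo_ = foo; }
            int foo() const { return foo_; }
        private:
            int foo_ = 0;
        };

        class Derived : public Base {
        public:
            Derived(int foo) : Base(foo) {}
            void setFoo(int foo) override { Base::setFoo(foo * 2); } // extra logic lost during construction
        };

        int main() {
            Derived d(21);
            std::cout << d.foo() << "\n"; // prints 21: the override was bypassed in the constructor
            d.setFoo(21);
            std::cout << d.foo() << "\n"; // prints 42 once construction is complete
        }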

    Read the article

  • Why is Ubuntu unmounting my primary hard drive?

    - by Twisol
    I'm running Ubuntu 10.10 on my laptop (an Asus G73j), dual-booting Windows 7 if that matters. After using the computer for a couple of hours or so, I get a popup complaining that a file was unmounted, and then my GNOME desktop panels disappear. I can't save any unsaved work (the file browser shows "Filesystem" as totally empty), and other programs break in odd ways (for example, Chrome can't browse to any new pages, but keeps current ones going... at least I still have Pandora to listen to when this happens!). I've tried looking in the system logs to no avail; I'm assuming that it can't write any errors to the logs because, of course, the logs are on the primary hard drive. This started happening maybe a few days ago. Yesterday I upgraded from 10.04, but I believe it was happening before then. Any advice for figuring this out?

    Read the article

  • Catalyst 12.1 for AGP card

    - by Brian
    I have a Radeon HD 4650 AGP card. The installation instructions at http://wiki.cchtml.com/index.php/Ubuntu_Oneiric_Installation_Guide download Catalyst 12.1. However, there is no download available at http://support.amd.com/us/gpudownload/Pages/index.aspx for my AGP card; only Windows drivers are available. Should I go ahead and use the guide and manually install 12.1? I assume it's not on the AMD website for a reason, but I was wondering if anybody had any experience with this. Thanks!

    Read the article

  • 301 Redirects for regional variants of a homepage

    - by Adam Jenkin
    I am planning on implementing a website which has regional homepage variants. For example: mycompany.com/europe, mycompany.com/us. The rest of the site is region-agnostic and content will continue as usual, e.g. mycompany.com/news, mycompany.com/about-us, etc. For homepage (.com) requests, I plan on redirecting users to the correct homepage variant (via 301). If I cannot determine the correct one, I will fall back to redirecting them to the US homepage (/us). From an SEO point of view, firstly, is this OK, or should I be doing anything in addition to make search engines aware of the regional differences? As crawlers are region-agnostic, I plan on directing them to the US page with a 301 - or should I have something on the .com page which they use? Given that the regional homepages will likely be the most visited pages, they should show up as sitelinks when searching for mycompany (which I think is a good thing). Apologies for the slightly open question - I know anything SEO-related is more opinion/best practice than fact, but I am purely looking for advice.

    Read the article

  • How to configure Google Analytics experiments manually

    - by John
    I wish to run multivariate tests on an e-commerce site, across all product pages. I will be setting and deciding the variations myself; all I need to do is track the results in GA. I think this may be possible (although only A/B testing is available via the GA UI): https://developers.google.com/analytics/devguides/platform/features/experiments#serving-framework EXTERNAL - You will choose variations, handle experiment optimization, and only report the chosen variation to Google Analytics. For example, this should be used by 3rd-party optimization platforms that want to integrate with Google Analytics for reporting purposes. In this case, the Google Analytics statistical engine will not run. However, how do I configure this and push the data to GA from my page?
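
    A minimal sketch of what the reporting side could look like, assuming analytics.js and assuming the experiment fields documented for externally served Content Experiments (expId / expVar) still apply - worth verifying against the current GA documentation. The IDs below are placeholders.

        // After your own code has decided which variation to serve on this product page:
        var experimentId = 'EXPERIMENT_ID_FROM_GA';    // placeholder
        var chosenVariation = 2;                       // index of the variation actually served

        ga('set', 'expId', experimentId);              // which experiment this hit belongs to
        ga('set', 'expVar', String(chosenVariation));  // which variation was shown
        ga('send', 'pageview');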

    Read the article

  • What are iTunes search results sorted by?

    - by NemesisII
    This is my app on iTunes: http://itunes.apple.com/us/app/buddy-calculator/id445261163?mt=8 My app's keyword list is: "calculator scientific calc equation math mathematics unit converter conversion statistic algebra". But when I search "Calculator" on iTunes, my app does not appear (in the first two pages). So I want to ask: how can I improve the rank of my app, and what are the search results sorted by (newest first, votes, downloads, ...)? If I want to improve the rank, what can I do, and does it cost a fee or something? Thank you very much ^^!

    Read the article

  • Moving large website to new CMS - URL changes

    - by herrherr
    Hi, I was wondering if you have any tips on the following situation. I'm going to move a large website to a new content management system. Here are some details on the site: online news magazine with roughly 3,000 articles; domain age: 10 years; online in the current form since May 2010; indexed pages: ~10,000; share of search engine traffic: under 10%. Unfortunately a custom-tailored CMS was used for the site. The performance, reliability and SEO capabilities have been really bad, so we are moving to a new and proven open source CMS. All the articles will be kept as they are, but the URL structure as well as the structure of the HTML templates will be changed. What I want to do now is create 301 redirects for all articles from the old to the new schema, i.e.: Old: www.example.com/en/html/news/detail/title-of-the-article/ New: www.example.com/category/title-of-article.html Is this a proven way to do something like this? If not, can you recommend a way that has worked for you? Thanks :)
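
    A hedged sketch of what that redirect layer could look like if the old URLs are still served by Apache (nginx or a redirect module in the new CMS would achieve the same; the paths below are the example ones from the question). Since the new category can't be derived mechanically from the old path, each article gets an explicit mapping, typically generated from the article database; Apache's RewriteMap is an alternative if a flat list of ~3,000 rules gets unwieldy.

        # One explicit 301 per migrated article (mod_alias)
        Redirect permanent /en/html/news/detail/title-of-the-article/ http://www.example.com/category/title-of-article.html
        Redirect permanent /en/html/news/detail/another-article/ http://www.example.com/other-category/another-article.html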

    Read the article

  • Adwords: Is there a drawback to setting a really high CPC to learn what works faster?

    - by Rob Sobers
    I'm toying with increasing my max CPC really high on all my keywords to ensure my ad gets shown in the top spot on page one, in order to draw more clicks. I think this will be a good way to quickly figure out whether the ads I'm writing have a decent CTR and, more importantly, whether the landing pages I'm building are converting. Since I can set a max daily budget for my campaign, I won't risk breaking the bank. I can't think of any drawbacks, personally. Am I missing any?

    Read the article

  • What's the best platform for blogging about coding? [closed]

    - by timday
    I'm toying with starting an occasional blog for posting odd bits of coding-related stuff (mainly C++, probably). Are there any platforms which can be recommended as providing exceptionally good support (e.g. syntax highlighting) for posting snippets of code? (Or any to avoid, because posting monospaced-font blocks of text is a pain.) Outcome: I accepted Josh K's answer because what I actually ended up doing was realizing I was more interested in articles than a blog style, getting back into LaTeX (after almost 20 years away from it), using the "listings" package for code, and pushing the HTML/PDF results to my ISP's static-hosting pages (HTML generated using tex4ht). Kudos to the answers mentioning Wordpress, Tumblr and Jekyll; I spent some time looking into all of them.

    Read the article

  • Why does qt-creator need to connect to google-analytics?

    - by Nanda
    I just installed qt-creator to work on non-Qt C++ projects. The installed version is 2.5.0 (based on Qt 4.8.2, 32-bit). If I click on any of these pages, I get an error. I realized that my /etc/hosts file has the following entry: 127.0.0.1 www.google-analytics.com I don't want to remove the entry from the hosts file because it's always been there, along with thousands of other similar adservice/porn/malware addresses. I do not mean to say that qt-creator is looking to create problems on my computer, but I am genuinely interested to know why qt-creator needs to connect to google-analytics. Can this be disabled while keeping qt-creator functional?

    Read the article

  • Will Tracking Subdomains as Single Entity with Google Analytics Help SEO? [closed]

    - by Sam Gridley
    Possible Duplicate: Does Google Analytics data affect SEO? We have two subdomains, one for our blog and one for our ecommerce store. The blog serves to bring traffic, and the store is how we monetize the site. We have them designed to appear as one large site, but I know Google sees them as two sites. Here is how the subdomains look: www.example.com (store), blog.example.com (blog). I believe I can configure Analytics to use subdomain tracking as explained here: http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=55524 But my question is whether this will cause Google to see our two subdomains as one larger domain for SEO purposes. In other words, is there any relationship between how you configure Google Analytics and how Google indexes and ranks your website(s) and pages? Is there anything I need to do in Analytics or Webmaster Tools to make Google aware that these two subdomains work together as one website? Thanks! Sam
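
    For reference, the subdomain tracking mentioned above boils down to one extra line in the classic asynchronous ga.js snippet on both subdomains (a sketch; Universal Analytics uses a cookieDomain option instead, and the property ID below is a placeholder). It only changes how visits are reported inside Analytics; it does not, by itself, tell Google's search index anything.

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);      // placeholder property ID
        _gaq.push(['_setDomainName', 'example.com']);  // share the GA cookie across www. and blog.
        _gaq.push(['_trackPageview']);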

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for visitors coming from http://www.domain.com/abc123, I want their pageviews, time on site, bounce rate, referrer, pages per visit, etc. After that I can store that info in my database on a daily basis. Can anyone suggest any service, API, or software for this? I have used Google Analytics UTM tags, which work well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track a day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on campaigns in GA. If there's no such limit, I can start this project. Thanks
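
    To make the UTM approach concrete, each tracked link just carries its own campaign parameters, for instance using the link id as the campaign name (the values below are placeholders). As far as I know GA does not document a hard cap on distinct campaign values, though very high-cardinality dimensions can end up rolled into an "(other)" bucket in standard reports.

        http://www.domain.com/abc123?utm_source=partner-site&utm_medium=referral&utm_campaign=abc123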

    Read the article

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working and now need to exclude a couple of pages from using SSL. So I need a rule that rewrites all URLs to HTTPS except those referencing this folder. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example: <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true"> <match url="(/nossl/.*)" negate="true" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTPS}" pattern="off" /> </conditions> <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" /> </rule> But this doesn't work. Can anyone help?
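
    A hedged guess at the cause, for anyone hitting the same wall: the pattern in match url is tested against the URL without a leading slash, so "(/nossl/.*)" never matches and the negated rule fires for every request, including /nossl/. A sketch of the rule built the other way around (match everything, exclude the folder via a condition on {REQUEST_URI}, which does include the leading slash) - adjust the folder name to your site:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
            <add input="{REQUEST_URI}" pattern="^/nossl/" negate="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>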

    Read the article

  • Constructor should generally not call methods

    - by Stefano Borini
    I described to a colleague why a constructor calling a method can be an antipattern. Example (in my rusty C++): class C { public: C(int foo); void setFoo(int foo); private: int foo; }; C::C(int foo) { setFoo(foo); } void C::setFoo(int foo) { this->foo = foo; } I would like to motivate this point better through your additional contributions. If you have examples, book references, blog pages, or names of principles, they would be very welcome. Edit: I'm talking in general, but we are coding in Python.

    Read the article

  • Common reasons for the 'Sys is undefined' error in ASP.NET Ajax applications

    In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax-enabled web site or application (there are almost one million results on Google for that phrase). Where does it come from? In every Ajax web page's source you will see code like this:

        <script type="text/javascript">
        //<![CDATA[
        Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1'));
        Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90);
        //]]>
        </script>

    This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when that code executes, you get the 'Sys is undefined' error. Here are the most common reasons and solutions for the problem:

    1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site and the 'Sys is undefined' message pops up. To fix it, follow the steps described in the Configuring ASP.NET Ajax article (check the topic called Adding ASP.NET AJAX Configuration Elements to an Existing Web Site) or simply create an Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?

    2. Authentication - as the website denies access to all pages to unauthorized users, access to the Telerik.Web.UI.WebResource.axd handler is unauthorized (this is the default handler of RadScriptManager). This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it, add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd for all users, like:

        <configuration>
          ...
          <location path="Telerik.Web.UI.WebResource.axd">
            <system.web>
              <authorization>
                <allow users="*"/>
              </authorization>
            </system.web>
          </location>
          ...
        </configuration>

    Note that access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager you will not face this problem. The authentication problem does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManager's combined script.

    3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well-known bug in older versions of IE6 which lose the first 2,048 bytes of data sent back from a web server that uses HTTP compression. The latest versions of RadScriptManager do not compress the output at all if the client is IE6, but in previous versions you need to manually disable the output compression to prevent the error. So, if you get the 'Sys is undefined' error in IE6, update to the latest version of RadControls or simply disable the output compression.

    4. Requests to the *.axd files return Error Code 404 - Not Found. This can be fixed easily: check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed, and that the "Verify that file exists" checkbox for it is unchecked.

    5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.

    6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still keep it in mind. Scott Guthrie describes it in more detail.

    More information can be found in our troubleshooting article and in the ASP.NET QA team's blog post. In the points above I mentioned the terms web resources, javascript output and compressed script several times. If you want to find out more about these, please see the Web Resources Demystified series by my friend and colleague Atanas Korchev. I hope that one of the above solutions will help you get rid of the 'Sys is undefined' error.

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user agents, serve all requests from those user agents a 400 or similar, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user agent.)

    Read the article
