Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have compared with native apps, especially on mobile, is the constant need to re-download each web page on request, which ultimately leads to slower performance. What if web apps only downloaded pages when they had actually changed, not simply because they were requested? For example: perhaps the server could store a page version number in a cookie. Every slight change to the page on the server side changes the version number. Now, instead of the browser requesting the full page each time, why not just compare version numbers and have the server send the page only if they differ? If the page is unchanged, the user can just use the cached copy. Browsers wouldn't necessarily have to change to accommodate this, correct? (A sketch of the idea follows below.)

    Read the article
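
    A minimal TypeScript sketch of the version-check idea, assuming the server exposes the page version through the standard ETag response header (HTTP conditional requests with ETag and If-None-Match already work along these lines); the localStorage keys are made up for illustration:

        // Ask the server whether our cached copy is still current before
        // re-downloading the whole page.
        async function loadPage(url: string): Promise<string> {
          const cachedVersion = localStorage.getItem(`version:${url}`);
          const cachedBody = localStorage.getItem(`body:${url}`);

          const response = await fetch(url, {
            headers: cachedVersion ? { "If-None-Match": cachedVersion } : {},
          });

          // 304 Not Modified: the server says the cached copy is unchanged.
          if (response.status === 304 && cachedBody !== null) {
            return cachedBody;
          }

          // Otherwise store the fresh body and its version for next time.
          const body = await response.text();
          const version = response.headers.get("ETag");
          if (version !== null) {
            localStorage.setItem(`version:${url}`, version);
            localStorage.setItem(`body:${url}`, body);
          }
          return body;
        }

    Whether the browser's own HTTP cache intercepts the 304 varies, so treat this as an illustration of the flow rather than production code.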

  • Where is EaseUS coming from?

    - by Malcolm Lawrie
    I downloaded the Universal USB Installer and Ubuntu 12.04 Desktop as described on your site. I installed it to a 16 GB USB stick, including the format option. Now when I try to boot from the stick into Ubuntu, I get a couple of lines of script and then an EaseUS Todo Backup screen with Backup, Recovery, Clone and Tools options, but no sign of Ubuntu starting. Where is the option to start Ubuntu, please? I can find no reference to EaseUS on your help pages.

    Read the article

  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
    We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs etc. actually be too long, and if so, how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...

    Read the article

  • How should I track multi-valued page attributes (e.g. tags) using custom variables?

    - by Simon
    Our pages can each have many tags, e.g. 'football', 'sms', 'nsfw', etc., which we would like to track in Google Analytics. We're already tracking things like category using Google Analytics custom variables, and we've used three of the five available slots so far. How can we track tags the same way? If we just mush them all together, e.g. 'football, sms, nsfw', can we still track the pages tagged 'football'? What's the right way to track multi-valued page attributes using custom variables? (A sketch of one approach follows below.)

    Read the article
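
    A minimal sketch of one approach, assuming the classic ga.js asynchronous queue (_gaq) and its _setCustomVar call; the slot number (4) and the '|' delimiter are arbitrary choices for illustration:

        declare const _gaq: Array<unknown[]>; // provided by the ga.js tracking snippet

        function trackTags(tags: string[]): void {
          // Wrap every tag in the delimiter so reports can filter on an exact tag,
          // e.g. custom variable value contains "|football|".
          const joined = `|${tags.join("|")}|`; // "|football|sms|nsfw|"
          _gaq.push(["_setCustomVar", 4, "Tags", joined, 3]); // scope 3 = page level
        }

        trackTags(["football", "sms", "nsfw"]);

    Custom variable name/value pairs have a tight length limit, so very long tag lists may need truncating or splitting across slots.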

  • Mouse cursor is MASSIVE inside Firefox and Chromium

    - by user171396
    While installing Ubuntu I accidentally hit the high-contrast option. I could not figure out how to disable it within the installer, so I let it complete. I booted into Ubuntu 13.04 and high contrast was still on. I disabled it in Universal Access, and now I'm noticing my mouse cursor is huge in web browsers. This is very much a stock install. Is there a setting to disable the huge mouse cursor? The thing is four times the size of the text on normal pages, and it's only in browsers from what I've seen so far. EDIT: Looks like it's in everything with text: terminal, app store, folders and files... /sigh.

    Read the article

  • XDIME for Mobile Applications

    - by Carlos Gavidia
    I'm involved in a project that requires mobile-enabling some previously developed Portlets. The Portlets are deployed in WebSphere Portal, and the container offers a technology called IBM Mobile Portal Accelerator that uses XDIME to render mobile pages according to the device. I'm trying to read up on the technology and I'm having a bad time: Google only shows some outdated sites from IBM and even older posts from Volantis, another company involved in the technology (Amazon shows no related books). So... what's the current status of that technology, actually? Does it have some decent level of adoption?

    Read the article

  • Change from static HTML file to meta tag for Google Webmaster verification

    - by Wilfred Springer
    I started verifying the site by putting a couple of static HTML files in place. Then I noticed that Google wants you to keep these files in place. I didn't want to keep the static HTML files, so I want to switch to an alternative verification mechanism and include the meta tag on the home page. Unfortunately, once your site is verified, you never seem to be able to change to an alternative way of verification. I tried removing the HTML pages; no luck whatsoever. Google still considers the site to be 'verified'. Does anybody know how to undo this? All I want to do is switch to the meta-tag-based method of site ownership verification.

    Read the article

  • Sense of "stop on..." stanza when job is a task

    - by Binarus
    Hi, an upstart question (I think I have read all relevant man pages but could not find the answer there): What is the sense of using a "stop on ..." stanza in the definition of a job which is a task? The manuals tell us that such a job, after being started, just waits until its script (or exec stanza) is executed completely, and then stops automatically. Given that, what is the point in using "stop on ..." stanzas in such job definitions? For example, this is the job definition for Upstart's (very important) rc job in Natty 11.04 (leaving out comments and empty lines):

        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]
        export RUNLEVEL
        export PREVLEVEL
        console output
        env INIT_VERBOSE
        task
        exec /etc/init.d/rc $RUNLEVEL

    IMHO, the job, after being started by a runlevel event, will be stopped automatically as soon as /etc/init.d/rc $RUNLEVEL has finished. Thank you very much for any explanation!

    Read the article

  • Didn't you have problems with the upgrade from 11.10 to 12.04 (LibreOffice)?

    - by Pascal Paulus
    This is the first time I'm reporting something, hoping that it can be useful for you. Since updating from 11.10 to 12.04 (which includes updating LibreOffice, I suppose), I can no longer work with any document that was originally made in LibreOffice. Every change freezes the screen, and I can't save anything... I'm talking about complex documents of about 230 pages (PhD work), with lots of internal references, footnotes and some proper text styles. I wanted to alert you that something is probably wrong, but as I don't have any technical knowledge, I don't know what could be useful to help you in your great job of making good free software. My little desktop has 2 GB of RAM and an Atom processor (I can look for more details if that would be useful to you).

    Read the article

  • How do I add restrictions for users to sign up before they can access web site?

    - by user1867842
    How do I keep users from seeing pages via the back button after they have logged out, and how can I block pages the way Facebook does? Facebook doesn't let you into the site without an account, and if you put something in the URL and try to go straight to a page on their site, it shows a page that says "you have to be logged in first". Likewise, I don't want someone going to the URL of the "index" page before they have signed up as a member; they need to make an account first, and only then have access to the "index" page. How do I do this? So far my website has a database and five pages, two of which are the login and sign-up pages; both are built with PHP and MySQL and work fine. How do I restrict access to the main website by first having users sign up with me for an account? (A sketch of the usual pattern follows below.)

    Read the article
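
    The poster's stack is PHP/MySQL, but the usual pattern is the same in any stack: every protected page checks the session, and responses are sent with no-store caching headers so the back button cannot show the page after logout. A minimal TypeScript sketch using Express and express-session (the routes and secret are placeholders):

        import express from "express";
        import session from "express-session";

        const app = express();
        app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

        // Gate for members-only pages.
        function requireLogin(req: express.Request, res: express.Response, next: express.NextFunction) {
          // Stop the browser from serving this page from cache after logout.
          res.set("Cache-Control", "no-store, must-revalidate");
          // A real app would augment the session type instead of using "any".
          if (!(req.session as any).userId) {
            return res.redirect("/login"); // not signed in: send to the login page
          }
          next();
        }

        app.get("/index", requireLogin, (_req, res) => {
          res.send("Members-only home page");
        });

        app.listen(3000);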

  • Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses

    Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses. Update of 14/01/11: as we anticipated yesterday (see below), WebMatrix, Microsoft's new IDE, has been released. WebMatrix is an all-in-one tool aimed at all developers, but particularly at students or anyone looking for a simple and fast way to build a website. It includes IIS Express (a development web server), ASP.NET Web Pages (a web development technology), and SQL Server Compact (an embedded database). "WebMatrix democratizes the web platform by...

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So, a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page, and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews in the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on microformat data/etc.) that this is not duplicate content?

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website that crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps containing all of these pages (1M+), because they contain only the file name, size, number of downloads and the download link(s). I'm considering this because I made another website like this in the past and Google banned it after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap or wait until Google indexes each page as it finds it? Thanks in advance :)

    Read the article

  • Dual screens not working with nVidia

    - by user91396
    So I'm very much an Ubuntu noob. In fact, I just installed Ubuntu on my PC and started it up with both my screens plugged into my nVidia card's DVI and VGA ports, logged in, and changed the skin to classic GNOME, because that's how it was when I last used Ubuntu (8.10), and both screens were working separately. The trouble is that I got a notification saying there were nVidia drivers to be installed, so I installed them and restarted my PC, as it told me to, and when I got back on, only one of my screens was working. When I go into Displays (All Settings, Displays) it doesn't register my other screen at all, and it calls my working screen "Laptop". I've tried looking through several pages of Google but I see no answer. I did try to find nvidia-settings to see if that had the answer, but sadly I couldn't locate it. Thanks in advance for any help, but please remember, I am very new to Ubuntu.

    Read the article

  • Site inaccessible by some people, fine for others [on hold]

    - by Paul Howell
    A couple of days ago my website www.howellphoto.com (hosted by one.com, a WordPress site) started loading really slowly, and I have been unable to access any pages linked from the homepage. Several of my friends have found the same issue, yet many are able to access the site without a problem. Live support at one.com has not been all that much help, requesting the IP addresses of a few people who cannot access the site and saying it could be a firewall issue. WordPress support (my site was created in prophotoblogs) have been better and have updated all plugins, etc., but can see no issue from their end. My main issue is that even if there were a local fix I could do on my computer, it would not help with any potential customers visiting my site for information! This is driving me crazy!!! Any help will be legendary! Cheers, Paul

    Read the article

  • Category title and its effect on SEO and ranking [closed]

    - by Mark
    We are working on a jobs and skills website (similar to Skill Pages) and are deciding on the names of categories. Rather than having loads of categories and sub-categories, for example Builder, Electrician, Carpenter, etc., we would like to have more general and easier-on-the-eye category names, for example House, Computer, Education, Art, etc. So a builder would be in the House category and a few others. Will this style negatively affect our SEO and ranking? And if so, should we abandon it and go back to traditional categories and sub-categories?

    Read the article

  • Accessing Master Page Controls

    - by Bunch
    Sometimes when using Master Pages you need to set a property on a control from the content page. An example might be changing a label’s text to reflect some content (e.g. customer name) being viewed, or maybe changing the visibility of a control depending on the rights a user may have in the application. There are different ways to do this, but this is the one I like. First, in the code-behind of the Master Page, create the property that needs to be accessed. An example would be:

        Public Property CustomerNameText() As String
            Get
                Return lblCustomerName.Text
            End Get
            Set(ByVal value As String)
                lblCustomerName.Text = value
            End Set
        End Property

    Next, in the aspx file of the content page, add the MasterType directive like:

        <%@ MasterType VirtualPath="~/MasterPages/Sales.master" %>

    Then you can access the property in any of the functions of the code-behind of the aspx content page:

        Master.CustomerNameText = “ABC Store”

    Technorati Tags: ASP.Net, VB.Net

    Read the article

  • Issue with sitemap in GWT

    - by Anusha
    I have an e-commerce website, www.beyondtime.in. I have been constantly monitoring Google Bot crawling on my website and my Webmaster account. Lately, I have found two issues that I have not been able to understand and hence want your help. 1) Google Bot has been crawling only the URL www.beyondtime.in/telecom.php on my website, when that URL is not even valid. So kindly help me understand what needs to be done to let Google crawl the other pages of the website as well. 2) The second question is about the Google Webmaster account, where I've submitted my sitemap with 227 URLs, but out of those only 156 have been indexed. Also, none of the images on my website have been indexed by Google. So kindly help me with this as well. Thanks

    Read the article

  • Serverless Web Application

    - by Andrea Di Persio
    In my company we work on software that produces reports in HTML format. My bosses love the fact that static HTML pages can be moved across computers simply by moving or copying a folder, with no web server involved, so the customer only needs a browser. The problem is that they are asking me to implement a lot of features which are very hard to implement properly and cleanly without an application server: frame cross-domain problems, the impossibility of working with GET and POST data, no URL routing... it is very hard to work within these limitations. Has anyone had a similar experience and wants to share their tricks/suggestions? Do I need to tell my boss 'there is no future without a web server'? Regards.

    Read the article

  • Should I include everything in the sitemap or only new content?

    - by Mee
    For a website with dynamic content (new content is constantly being added), should I include only the newest content in the sitemap, or should I include everything (with a sitemap index; see the sketch below)? What are the best practices for sitemaps, especially for large sites? Also, is there any way to make Google (and other search engines) crawl only the pages in the sitemap? Thanks. Update: Also, any idea how Stack Overflow handles this? I'd like to know, but unfortunately (and understandably) they have blocked access to their sitemap.

    Read the article
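
    A minimal TypeScript sketch of the "include everything behind a sitemap index" option, assuming the standard sitemaps.org limit of 50,000 URLs per sitemap file; the file-naming scheme is made up for illustration:

        const MAX_URLS_PER_SITEMAP = 50000; // sitemaps.org limit per file

        // Returns the sitemap index plus one <urlset> document per chunk of URLs.
        function buildSitemaps(urls: string[], baseUrl: string): { index: string; files: string[] } {
          const files: string[] = [];
          for (let i = 0; i < urls.length; i += MAX_URLS_PER_SITEMAP) {
            const chunk = urls.slice(i, i + MAX_URLS_PER_SITEMAP);
            const body = chunk.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
            files.push(
              `<?xml version="1.0" encoding="UTF-8"?>\n` +
                `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${body}\n</urlset>`
            );
          }
          const entries = files
            .map((_, n) => `  <sitemap><loc>${baseUrl}/sitemap-${n + 1}.xml</loc></sitemap>`)
            .join("\n");
          const index =
            `<?xml version="1.0" encoding="UTF-8"?>\n` +
            `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</sitemapindex>`;
          return { index, files };
        }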

  • JavaOne India Technical Sessions

    - by Tori Wieldt
    If you’re working with Java technology, it pays to go straight to the source for your information. At JavaOne and Oracle Develop India, you’ll be able to choose from more than 90 sessions, hands-on labs, keynotes, and demos delivered by today’s most knowledgeable Java experts. You'll also hear the most up-to-date information on current releases and future directions of Java standards and technologies, and see the latest Java developer tools and solutions. Register now! Technical sessions include:
    - Project Lambda: To Multicore and Beyond
    - Introduction to JavaFX 2.0
    - GlassFish REST Administration Back End: An Insider Look at a Real REST Application
    - Java-Powered Home Gateway: Basis of the Next-Generation Smart Home
    - Mobile Java Evolution
    - Cloud-Enabled Java Persistence
    Visit the JavaOne India web pages for a complete list of conference sessions. See you there!

    Read the article

  • What is required to create local business rich-snippets complete with sitelinks AND breadcrumbs?

    - by Felix
    I have a local business directory site. I would like to mark up my business listing 'profile' pages for display as enhanced listings/rich snippets, complete with business names, addresses and phone numbers. I would also like to display sitelinks and path-based breadcrumbs to help users navigate the site's directory hierarchy (which is deep). Is there a limit to the number of breadcrumbs a site can expose? Is there a separate limit on the number of breadcrumbs Google/Bing will display in the SERP? What kind of markup language(s) would be needed to best position my site to show sitelinks AND breadcrumbs? (A markup sketch follows below.) For example:
    Find a business > Browse by Location > State > City > Zip
    or
    Find a business > Choose Service > Browse by Location > State > City
    Thanks all!

    Read the article
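
    One commonly documented markup option for breadcrumbs is schema.org BreadcrumbList structured data; a minimal TypeScript sketch that builds the JSON-LD for the first example trail (the URLs are placeholders, not the site's real hierarchy):

        // Build schema.org BreadcrumbList JSON-LD for one example trail.
        const crumbs = [
          { name: "Find a business", item: "https://example.com/" },
          { name: "Browse by Location", item: "https://example.com/browse" },
          { name: "State", item: "https://example.com/browse/state" },
          { name: "City", item: "https://example.com/browse/state/city" },
        ];

        const breadcrumbList = {
          "@context": "https://schema.org",
          "@type": "BreadcrumbList",
          itemListElement: crumbs.map((c, i) => ({
            "@type": "ListItem",
            position: i + 1,
            name: c.name,
            item: c.item,
          })),
        };

        // Embed the result in the page inside a <script type="application/ld+json"> tag.
        console.log(JSON.stringify(breadcrumbList, null, 2));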

  • Does Google see the output of document.write?

    - by merk
    I've got a site where people can list machinery for sale. Each item for sale has its own dynamic page. On each of these pages we allow the person selling the item to have a link back to their own website. Some people only sell a handful of items and some are selling dozens or hundreds, so in some cases we can have 100 links back to their external site. Our SEO guy is saying this is bad (I'll open another question on that). So I was wondering: if I take the links and spit them out using document.write (see the sketch below), will that hide them from Google and the other search engines?

    Read the article
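
    Roughly what the question describes, sketched in TypeScript: emit the seller links from script with document.write instead of server-rendering them (the seller data is made up). This is only an illustration of the approach being asked about; modern crawlers do execute JavaScript, so it should not be assumed to hide the links:

        // Hypothetical seller data; on the real site this would come from the listing.
        const sellerLinks: Array<{ href: string; label: string }> = [
          { href: "https://example-seller.com", label: "Visit the seller's website" },
        ];

        // Write the links into the page at parse time instead of server-rendering them.
        for (const link of sellerLinks) {
          document.write(`<a href="${link.href}">${link.label}</a>`);
        }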

  • Getting through a lengthy book?

    - by Mr_Spock
    This may seem like a weird question, but since we're challenged, as engineers, to constantly adapt to changing technologies, we always find ourselves buried in documentation. That said, we also need to consider that time is of the essence, because people want their stuff fixed and improved with little hesitation, if any. How do you get through lengthy manuals and books within a short period of time? Take, for example, "The Linux Programming Interface" by Michael Kerrisk, which is roughly 1500 pages long. How would you get through a monster of a book like this if you're pressed for time while still learning most of the material?

    Read the article

  • Are there Any Concerns with Importing Document Files From a Competing Product?

    - by Thunderforge
    I have a new product that serves the same purpose as my competitor's long-standing product. One thing I have considered doing is allowing my program to import document files created by their product in order to provide an easy way for users to migrate towards mine. Naturally, this would be done without the competitor's permission, as it goes against their interests. I've seen this done before with office suite software (e.g. Open Office and Apple Pages can import MS Word documents), but I'm wondering if there are any concerns, legal or ethical, with me doing this. I fully expect any answers will most likely fall under the "I am not a lawyer" clause, but it would be helpful to have a starting point for anything I would need to be aware of, or if I shouldn't need to worry.

    Read the article
