Search Results

Search found 9935 results on 398 pages for 'pages'.


  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have against native apps, especially on the mobile front, is the constant need to re-download each web page on request. Ultimately, this leads to slower performance. Why not have web apps download pages only when they have actually changed, not simply because they were requested? For example: perhaps the server could store a page version number in a cookie. Every slight change to the page on the server side bumps the version number. Instead of the browser requesting a full new page each time, why not just compare version numbers and have the server send the page only if they differ? If the versions match, the user can just use the cached page. I assume browsers wouldn't necessarily have to change to accommodate this, correct?
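
    Worth noting: HTTP conditional requests already behave much like this; the server labels a response with a version tag (an ETag) and the browser echoes it back on the next request, getting a short "304 Not Modified" reply when nothing has changed. Below is a minimal sketch of the version-check idea, assuming a Node.js server with Express installed; the route, version value and page content are made-up placeholders.

      // Minimal sketch: send the page only when the stored version differs
      // from the one the browser already has (carried here in the ETag header).
      const express = require('express');
      const app = express();

      let pageVersion = '42';                          // bumped on every change to the page
      let pageHtml = '<html><body>Hello</body></html>';

      app.get('/page', (req, res) => {
        // The browser reports the version it has cached via If-None-Match.
        if (req.headers['if-none-match'] === pageVersion) {
          return res.status(304).end();                // unchanged: reuse the cached copy
        }
        res.set('ETag', pageVersion);                  // tell the browser the current version
        res.send(pageHtml);
      });

      app.listen(3000);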

    Read the article

  • Dual screens not working with nVidia

    - by user91396
    So I'm very much an Ubuntu noob; in fact I just installed Ubuntu on my PC. I started it up with both my screens plugged into my nVidia card's DVI and VGA ports, logged in, and changed the skin to classic GNOME, because that's how it was when I last used Ubuntu (8.1), and both screens were working separately. The trouble is that I got a notification saying there were nVidia drivers to be installed, so I installed them and restarted my PC, as it told me to. When I got back on, only one of my screens was working, and when I go into Displays (All Settings, Displays) it doesn't register my other screen at all and calls my working screen "Laptop". I've tried looking through several pages of Google results but I see no answer. I did try to find nvidia-settings to see if that had the answer, but sadly I couldn't locate it. Thanks in advance for any help, but please remember, I am very new to Ubuntu.

    Read the article

  • Wifi works only after connecting through a wire

    - by orustam
    I have a fresh install of Ubuntu 12.04. It is my first Ubuntu installation and I'm a bit confused about the network connection. Wifi shows up and connects (at least it shows that the connection is established), but I can't open any pages; I've tried to ping some sites and that fails as well. If I connect through a wire, it works. What is interesting to me is that after I have used my wired connection, I can use my wifi properly without the wire plugged in. I think it probably has to do with my settings? I tried to find a solution but can't figure it out on my own. My proxy is set to none (and applied system-wide). Please help me if you have any clue :)

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps which contain all of these pages (1M+), because they contain only the file name, size, number of downloads and the download link(s). I'm considering this because I made another website like this in the past and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap, or wait until Google indexes every page as it finds them? Thanks in advance :)

    Read the article

  • Apress Deal of the day - 5/Feb/2011

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/info/dailydeal is Pro ASP.NET 4 in C# 2010, Fourth Edition. ASP.NET 4 is the latest version of Microsoft's revolutionary ASP.NET technology. It is the principal standard for creating dynamic web pages on the Windows platform. Pro ASP.NET 4 in C# 2010 raises the bar for high-quality, practical advice on learning and deploying Microsoft's dynamic web solution. $59.99 | Published Jun 2010 | Matthew MacDonald. I am reviewing this book at the moment, but I was already sufficiently impressed to have bought the PDF the day it became available last December.

    Read the article

  • Sense of "stop on..." stanza when job is a task

    - by Binarus
    Hi, an upstart question (I think I have read all relevant man pages but could not find the answer there): What is the sense of using a "stop on ..." stanza in the definition of a job which is a task? The manuals tell us that such a job, after being started, just waits until its script (or exec stanza) is executed completely, and then stops automatically. Given that, what is the point in using "stop on ..." stanzas in such job definitions? For example, this is the job definition for Upstart's (very important) rc job in Natty 11.04 (leaving out comments and empty lines):
      start on runlevel [0123456]
      stop on runlevel [!$RUNLEVEL]
      export RUNLEVEL
      export PREVLEVEL
      console output
      env INIT_VERBOSE
      task
      exec /etc/init.d/rc $RUNLEVEL
    IMHO, the job, after being started by a runlevel event, will be stopped automatically as soon as /etc/init.d/rc $RUNLEVEL has finished. Thank you very much for any explanation!

    Read the article

  • Did you have problems with the upgrade from 11.10 to 12.04 (LibreOffice)?

    - by Pascal Paulus
    This is the first time I'm reporting something, hoping that it can be useful to you. When updating from 11.10 to 12.04 (which includes updating LibreOffice, I suppose), I can no longer work with any document that was originally made in LibreOffice. Every change freezes the screen, and I can't save anything... I'm talking about complex documents with lots of internal references and footnotes and some proper text styles, of about 230 pages (PhD work). I wanted to alert you that probably something is wrong, but as I don't have any technical knowledge, I don't know what could be useful to help you in your great job of making good free software. My little desktop has 2 GB of RAM and an Atom processor (I can look for more details if that would be useful to you).

    Read the article

  • Category titles and their effect on SEO and ranking [closed]

    - by Mark
    We are working on a jobs and skills website (similar to Skill Pages) and are deciding on the names of categories. Rather than having loads of categories and sub-categories like, for example, Builder, Electrician, Carpenter etc., we would like to have more general and easier-on-the-eye category names. So for example we have House, Computer, Education, Art etc. A builder would then be in the House category and a few others. Will this style negatively affect our SEO and ranking? And if so, should we abandon it and go back to traditional categories and sub-categories?

    Read the article

  • Change from static HTML file to meta tag for Google Webmaster verification

    - by Wilfred Springer
    I started verifying the server by putting a couple of static HTML files in place. Then I noticed that Google wants you to keep these files in place. I didn't want to keep the static HTML files, so I want to switch to an alternative verification mechanism and include the meta tag on the home page. Unfortunately, once your site is verified, you never seem to be able to change to an alternative way of verification. I tried removing the HTML pages. No luck whatsoever. Google still considers the site to be 'verified'. Does anybody know how to undo this? All I want to do is switch to the meta-tag-based method of site ownership verification.

    Read the article

  • Does Google see the output of document.write?

    - by merk
    I've got a site where people can list machinery for sale. Each item for sale has its own dynamic page. On each of these pages we allow the person selling the item to have a link back to their own website. Some people only sell a handful of items and some people are selling dozens or hundreds of items, so in some cases we can have 100 links back to their external site. Our SEO guy is saying this is bad (I'll open another question on that). So I was wondering: if I take the links and spit them out using document.write, will that hide them from Google and the other search engines?
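
    For reference, a minimal sketch of what "spitting the links out with document.write" might look like on an item page; the variable name and URL are hypothetical:

      // The outbound link is emitted by script instead of sitting in the static HTML.
      var sellerUrl = 'http://www.example-seller.com';  // hypothetical seller site
      document.write('<a href="' + sellerUrl + '">Visit the seller\'s website</a>');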

    Read the article

  • Is there a way I can verify my Google Analytics custom report?

    - by SnowboardBruin
    I want to track scrolling on my website since it's a long page (rather than multiple pages). I saw several different methods, with and without an underscore for trackEvent, with and without spaces between commas:
      <script>
      ... ... ...
      ga('create', 'UA-45440410-1', 'example.com');
      ga('send', 'pageview');
      _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 100, true]);
      _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 75, false]);
      _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 50, false]);
      _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 25, false]);
      </script>
    It takes a day for counts to load with Google Analytics, otherwise I would just tweak and test right now.
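
    One thing to be aware of (as an aside, not an answer to the reporting question): the ga('create', ...) / ga('send', ...) lines use the newer analytics.js syntax, while _gaq.push(['_trackEvent', ...]) belongs to the older ga.js library, and the two are not normally mixed on one page. Below is a minimal sketch of firing a scroll-depth event with the analytics.js syntax, assuming ga() is already loaded; the category/action names simply mirror the ones in the question:

      // Send one event when the reader has scrolled to the bottom of the page.
      var sent100 = false;
      window.addEventListener('scroll', function () {
        var scrolled = window.pageYOffset + window.innerHeight;
        var depth = Math.round(100 * scrolled / document.body.scrollHeight);
        if (depth >= 100 && !sent100) {
          sent100 = true;
          // ga('send', 'event', category, action, label, value)
          ga('send', 'event', 'Consumption', 'Article Load', location.href, 100);
        }
      });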

    Read the article

  • Should I include everything in the sitemap or only new content?

    - by Mee
    For a website with dynamic content (new content is constantly being added), should I only include the newest content in the sitemap, or should I include everything (with a sitemap index)? What are the best practices for sitemaps, especially for large sites? Also, is there any way to make Google (and other search engines) only crawl the pages in the sitemap? Thanks. Update: Also, any idea how Stack Overflow handles this? I'd like to know, but unfortunately (and understandably) they have blocked access to their sitemap.

    Read the article

  • Serverless Web Application

    - by Andrea Di Persio
    In my company we work on a piece of software that produces reports in HTML format. My bosses love the fact that static HTML pages can be moved across computers simply by moving or copying a folder and no web server is involved, so the customer only needs a browser. The problem is that they are asking me to implement a lot of features which are very hard to implement properly and in a clean way without an application server. Frame cross-domain problems, the impossibility of working with GET and POST data, no URL routing... it is very hard to work with these limitations. Has anyone had a similar experience and wants to share their tricks/suggestions? Do I need to tell my boss 'there is no future without a web server'? Regards.

    Read the article

  • JavaOne India Technical Sessions

    - by Tori Wieldt
    If you’re working with Java technology, it pays to go straight to the source for your information. At JavaOne and Oracle Develop India, you’ll be able to choose from more than 90 sessions, hands-on labs, keynotes, and demos delivered by today’s most knowledgeable Java experts. You'll also hear the most up-to-date information on current releases and future directions of Java standards and technologies, and see the latest Java developer tools and solutions. Register now! Technical sessions include:
      Project Lambda: To Multicore and Beyond
      Introduction to JavaFX 2.0
      GlassFish REST Administration Back End: An Insider Look at a Real REST Application
      Java-Powered Home Gateway: Basis of the Next-Generation Smart Home
      Mobile Java Evolution
      Cloud-Enabled Java Persistence
    Visit the JavaOne India web pages for a complete list of conference sessions. See you there!

    Read the article

  • How to enable a Web portal-based enterprise platform on different domains and hosts without customization [on hold]

    - by S.Jalali
    At Coscend, a cloud and communications software product company, we have built a Web portal-based collaboration platform that we would like to host on five different Windows- and Linux-based servers, in different hosting environments that run Web servers. Each of these Windows and Linux servers has a different host name and domain name (and IP address). Our team would appreciate your guidance on: (1) Is there a way to implement this Web portal-based platform on these Linux and Windows servers without customizing the host name, domain name and IP address for each individual instance? (2) Is there a way to create some variables using JavaScript for the host name and domain name and call them from the different implementations? If a reference to the host/domain names occurs on hundreds of our pages, the variables or objects would replace that. (3) This is part of making these JavaScript modules portable and reusable for different environments and instances. The portal is written in JavaScript embedded in HTML5 and styled with CSS3. Other technologies include Flash, Flex, PostgreSQL and MySQL.
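
    On point (2), a minimal sketch of deriving the host and domain from the browser itself instead of hard-coding them per deployment; the Coscend namespace object is only an illustrative name:

      // Collect environment-specific values once, then reuse them everywhere.
      var Coscend = window.Coscend || {};
      Coscend.env = {
        protocol: window.location.protocol,   // e.g. "https:"
        host: window.location.hostname,       // e.g. "portal.example.com"
        baseUrl: window.location.protocol + '//' + window.location.host
      };

      // Pages then build links from the variables rather than literal host names:
      var loginUrl = Coscend.env.baseUrl + '/login';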

    Read the article

  • What is required to create local business rich-snippets complete with sitelinks AND breadcrumbs?

    - by Felix
    I have a local business directory site. I would like to mark up my business listing 'profile' pages for display as enhanced listings/rich snippets, complete with business names, addresses and phone numbers. I would also like to display sitelinks and path-based breadcrumbs to help users navigate the site's directory hierarchy (which is deep). Is there a limit to the number of breadcrumbs a site can leave? Is there a separate limit on the number of breadcrumbs which Google/Bing will display in the SERP? What kind of markup language(s) would be needed to best position my site to show sitelinks AND breadcrumbs? For example: Find a business > Browse by Location > State > City > Zip, or Find a business > Choose Service > Browse by location > State > City. Thanks all!

    Read the article

  • Site inaccessible by some people, fine for others [on hold]

    - by Paul Howell
    A couple of days ago my website www.howellphoto.com (hosted by one.com, a WordPress site) started loading really slowly, and I have been unable to access any pages linked from the homepage. Several of my friends have found the same issue, yet many are able to access the site without problems. Live support at one.com has not been all that much help, requesting the IP addresses of a few people who cannot access the site and saying it could be a firewall issue. WordPress support (my site was created in prophotoblogs) has been better and has updated all plugins, etc., but can see no issue from their end. My main issue is that even if there were a local fix that I could do on my computer, this would not help with any potential customers visiting my site for information! This is driving me crazy!!! Any help will be legendary! Cheers, Paul

    Read the article

  • The life saver HttpContext.Current.Items["ParameterName"]

    - by MoezMousavi
    I got stuck passing a parameter to a master page; for some reason, it seems the page lifecycle and dynamic loading of master pages has some issues with defining public properties in the master page within my project. It did not set my values and, as a result, the properties became useless. A colleague just mentioned using HttpContext. Have a look at what MSDN says: "Encapsulates all HTTP-specific information about an individual HTTP request" http://msdn.microsoft.com/en-us/library/system.web.httpcontext.aspx
      HttpContext.Current.Items["ParameterName"]
    Also, Page.Items could do the same thing. Page.Items "gets a list of objects stored in the page context" http://msdn.microsoft.com/en-us/library/system.web.ui.page.items.aspx, as your master page and content page are rendered as a single document anyway.

    Read the article

  • Accessing Master Page Controls

    - by Bunch
    Sometimes when using Master Pages you need to set a property on a control from the content page. An example might be changing a label’s text to reflect some content (e.g. customer name) being viewed, or maybe to change the visibility of a control depending on the rights a user may have in the application. There are different ways to do this but this is the one I like. First, in the code behind of the Master Page, create the property that needs to be accessed. An example would be:
      Public Property CustomerNameText() As String
          Get
              Return lblCustomerName.Text
          End Get
          Set(ByVal value As String)
              lblCustomerName.Text = value
          End Set
      End Property
    Next, in the aspx file of the content page, add the MasterType directive like:
      <%@ MasterType VirtualPath="~/MasterPages/Sales.master" %>
    Then you can access the property in any of the functions of the code behind of the aspx content page:
      Master.CustomerNameText = "ABC Store"
    Technorati Tags: ASP.Net, VB.Net

    Read the article

  • Collapsible menu and amount of links in a web page

    - by dstonek
    One of my pages contains three levels of a collapsible menu (JS + CSS from mycssmenu.com). There are a dozen first-level items displayed to users, each one with various second-level items, and finally a lot of third-level items, each one containing a related link. This generates a lot of internal links (300+). For SEO reasons, should I change the way the collapsible menu is displayed to reduce the number of links? What do you suggest? I would like to avoid making users open a new page just to see what the third-level items are and eventually follow one of their links.

    Read the article

  • XDIME for Mobile Applications

    - by Carlos Gavidia
    I'm involved in a project that requires mobile-enabling some previously developed Portlets. The Portlets are deployed in WebSphere Portal, and the container offers a technology called IBM Mobile Portal Accelerator that uses XDIME to render mobile pages according to the device. I'm trying to document myself on the technology and I'm having a bad time: Google only shows some outdated sites from IBM and even older posts from Volantis, another company involved in the technology (Amazon shows no related books). So... what's the current status of that technology, actually? Does it have some decent level of adoption?

    Read the article

  • Convert Microsoft Word documents (.doc/x) into HTML files

    - by danie7L T
    Does anybody know of a good application to get this done quickly and efficiently? I bought Word Cleaner but the results are merely sufficient, and I need to go over all the generated HTML files to clean tons of useless injected tags like <strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>. Most of the articles displayed on a website I manage are based on documents written in MS Word by people who have little idea of what margins are for, or ordered/unordered lists, foot/end notes etc., and I cannot make them use something else. Does anyone have a tip to help me handle those pages more efficiently than going over them to correct and apply my CSS style? NB: Just for the record, using "Save as HTML DOC" in Word is far worse than Word Cleaner.
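
    As a rough illustration (not a replacement for a proper cleaner such as HTML Tidy), useless inline wrappers like the ones above can be stripped with a couple of regular expressions; note that this crude version also removes legitimate <strong>/<em> emphasis:

      // Crude sketch: drop injected strong/em/span wrappers and collapse whitespace.
      function stripUselessTags(html) {
        return html
          .replace(/<\/?(strong|em|span)[^>]*>/gi, '')
          .replace(/\s{2,}/g, ' ');
      }

      // "<strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>"
      // becomes "Hello World"
      console.log(stripUselessTags('<strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>'));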

    Read the article

  • Getting through a lengthy book?

    - by Mr_Spock
    This may seem like a weird question, but since we're challenged, as engineers, to constantly adapt to changing technologies, we always find ourselves buried in documentation. That said, we also need to consider that time is of the essence, because people want their stuff fixed and improved with little hesitation, if any. How do you get through lengthy manuals and books within a short period of time? Take for example "The Linux Programming Interface" by Michael Kerrisk, which is roughly 1500 pages in length. How would you get through a monster of a book like this if you're pressed for time while still learning most of the material?

    Read the article

  • Chrome window freezes in Ubuntu

    - by Dragon5689
    Sometimes, especially when I open pages that have some kind of multimedia content, Chrome freezes. It always happens directly after opening a new tab. In contrast to the way Chrome usually has only a single tab crash, the entire window freezes. If I have multiple separate Chrome windows open, the others keep working. I run Ubuntu 12.04 and Chrome version 20.0.1132.47, but this has been going on since I last set up my machine around half a year ago. Is anyone having the same problems, or does anyone have an idea what could be wrong here?

    Read the article

  • Issue with sitemap in GWT

    - by Anusha
    I have an e-commerce website, www.beyondtime.in. I have been constantly monitoring the Google bot crawling on my website and my webmaster account. Lately, I have found two issues that I have not been able to understand and hence want your help with. 1.) The Google bots have been crawling only www.beyondtime.in/telecom.php on my website, when that URL is not even valid. So kindly help me understand what needs to be done to let Google crawl other pages of the website as well. 2.) The second question is about the Google Webmaster account, where I've submitted my sitemap with 227 URLs, but out of those only 156 have been indexed. Also, none of the images on my website have been indexed by Google. So kindly help me with this as well. Thanks

    Read the article
