Search Results

Search found 11086 results on 444 pages for 'asynchronous pages'.

Page 212 of 444

  • How to CURL and avoid timeout death (Twitter Down) [migrated]

    - by David
    Twitter is down right now, and one of my site's home pages relies on getting data from Twitter ("relies" is the problem: it should be more of an accessory feature, as it just shows the follow count from its feed). Here's the code in question:

        function socials_Twitter_GetFollowerCount($username) {
            $method = function () use ($username) {
                return file_get_contents('https://api.twitter.com/1/users/show.json?screen_name=' . $username . '&include_entities=true');
            };
            $json = cache('bmdtwitter', 3600, $method, false);
            $json = json_decode($json, true);
            return intval($json['followers_count']);
        }

    What is a good way to make it so that if Twitter is down (or not responsive for some reasonable amount of time), our site doesn't appear to be down? I think the timeout may be defaulting to 30-60 seconds or more.
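    A minimal sketch of one common fix, assuming the PHP build allows stream contexts: pass a short timeout to file_get_contents and let the caller fall back to the cached value on failure. The cache() helper is from the question; everything else here is illustrative.

        // Hypothetical replacement for the fetch closure, with a hard timeout.
        $method = function () use ($username) {
            $context = stream_context_create([
                'http' => ['timeout' => 3],  // give up after 3 seconds
            ]);
            $url = 'https://api.twitter.com/1/users/show.json?screen_name='
                 . urlencode($username) . '&include_entities=true';
            $result = @file_get_contents($url, false, $context);
            return $result === false ? null : $result;  // null lets the caller keep stale cache
        };

    With cURL, the equivalent knobs are CURLOPT_CONNECTTIMEOUT and CURLOPT_TIMEOUT.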

    Read the article

  • Where would you start if you were trying to solve this PDF classification problem?

    - by burtonic
    We are crawling and downloading lots of companies' PDFs and trying to pick out the ones that are annual reports. Such reports can be downloaded from most companies' investor-relations pages. The PDFs are scanned and the database is populated with, among other things, the title, contents (full text), page count, word count, orientation, and first line. Using this data we are checking for obvious phrases such as "annual report", "financial statement", "quarterly report", and "interim report", then recording the frequency of these phrases and others. So far we have around 350,000 PDFs to scan and a training set of 4,000 documents that have been manually classified as either a report or not. We are experimenting with a number of different approaches, including Bayesian classifiers and weighting the different factors available. We are building the classifier in Ruby. My question is: if you were thinking about this problem, where would you start?
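    For what it's worth, a minimal sketch of the scoring step of a naive Bayes classifier over phrase counts. The asker's stack is Ruby; this illustrative version is in PHP, and the priors and smoothed likelihoods are assumed to have been estimated from the 4,000-document training set.

        // Toy naive Bayes scorer: pick the class with the highest log-probability.
        function classify(array $phraseCounts, array $priors, array $likelihoods): string {
            $scores = [];
            foreach ($priors as $class => $prior) {
                $score = log($prior);
                foreach ($phraseCounts as $phrase => $count) {
                    // Smoothed probability of this phrase appearing in this class.
                    $p = $likelihoods[$class][$phrase] ?? 1e-6;
                    $score += $count * log($p);
                }
                $scores[$class] = $score;
            }
            arsort($scores);                   // highest score first
            return array_key_first($scores);   // e.g. 'report' or 'other'
        }

    The interesting work is in the features, not the scorer: page count, orientation, and first-line patterns can be folded in the same way as phrase counts.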

    Read the article

  • Best practices for launching a new software version

    - by steve
    I rebuilt a web app to replace a version that we have been using for the last 3-4 years. We have a few thousand clients and a few hundred active users per day. The functionality is basically the same: the new version is a little bit faster with a few new features, and there are a lot of behind-the-scenes changes that the clients will never see. The UI is quite different but ultimately much easier to use and navigate. How should I go about having our clients stop using the old system and start using the new one? I am currently putting together a video that will play on the website as well as within the app; it will go through the pages and focus on some key changes. I was also thinking about an intro page that will display once the user logs in and explain some of the features.

    Read the article

  • Where is EaseUS coming from?

    - by Malcolm Lawrie
    I have downloaded the Universal USB Installer and Ubuntu 12.04 Desktop as described on your site. I installed it to a 16 GB USB stick, including the format option. Now when I try to boot from the stick into Ubuntu, I get a couple of lines of script, then a screen with EaseUS Todo Backup offering Backup, Recovery, Clone, and Tools options, but no sign of Ubuntu starting. Where is the option to start Ubuntu, please? I can find no reference to EaseUS on your help pages.

    Read the article

  • How to use a database to generate multiple folder content pages? [migrated]

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will enable users to download music and movies to their cell phones. Problem example: suppose I upload 100 folders of songs, where each folder holds one album. I want a way to generate a page listing all the folder names (album names). If users click an album, they should be taken to a page listing all the songs in that album, and clicking any song name will let them download it. Can this be done somehow, or will I have to manually design each of the three pages for each album? If I do that, it is time-consuming and will also make it difficult to change anything like the footer or header...
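    A minimal sketch of the usual approach, assuming a MySQL table of songs with album and file-path columns (the schema and credentials here are hypothetical): one PHP template serves both the album index and each album's song list, so no page has to be designed by hand.

        <?php
        // albums.php: ?album=X lists one album's songs; no parameter lists all albums.
        $pdo = new PDO('mysql:host=localhost;dbname=portal', 'user', 'pass');
        if (isset($_GET['album'])) {
            $stmt = $pdo->prepare('SELECT title, file_path FROM songs WHERE album = ?');
            $stmt->execute([$_GET['album']]);
            foreach ($stmt as $song) {          // one download link per song
                printf('<a href="%s">%s</a><br>',
                       htmlspecialchars($song['file_path']),
                       htmlspecialchars($song['title']));
            }
        } else {
            foreach ($pdo->query('SELECT DISTINCT album FROM songs') as $row) {
                printf('<a href="?album=%s">%s</a><br>',  // album index entry
                       urlencode($row['album']),
                       htmlspecialchars($row['album']));
            }
        }

    Headers and footers then live once in the template, so site-wide changes touch one file instead of hundreds.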

    Read the article

  • Will the new site keep Google traffic from the old site when moving content? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all its articles to this new dating site. So this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is set up 301 redirects for all 200 articles. For pages that are not articles, I will just put up a message that the site will be down soon. Do you think I will get all the Google traffic those articles earned on the old site? Is there anything I should be aware of or careful about?
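    A minimal sketch of a per-article 301 in PHP, assuming the old site can run a script at the old URLs (the mapping array is hypothetical; on Apache the same effect is often achieved with Redirect 301 rules instead):

        // Permanent redirect from an old article URL to its new home.
        $map = [
            '/articles/some-old-article' => 'https://new-site.example/articles/some-old-article',
            // ... one entry per article, 200 in total
        ];
        $target = $map[$_SERVER['REQUEST_URI']] ?? null;
        if ($target !== null) {
            header('Location: ' . $target, true, 301);  // 301 = moved permanently
            exit;
        }
        http_response_code(410);  // gone: for the non-article pages being retired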

    Read the article

  • MediaWiki plugin for dynamic content via forms

    - by Geek42
    Are there any plugins for MediaWiki that would let me create a page with a form at the top which, when filled in, populates tags further down in the document? Say someone puts "Source Server:" and "Destination Server:" fields at the top; once those were typed in, the page would automatically substitute those names into the content below, so that when following the instructions you could just read the docs and not have to mentally replace things (and possibly mess it all up). I'm not looking to make pages that are permanent, just ones that can have values entered before they are followed. Any suggestions?

    Read the article

  • Getting a lot of '/_' errors from Webmaster Tools

    - by Vermino
    I'm running a WordPress site and I thought I had gotten all the kinks out of it. For some reason, when Google crawls my website, Webmaster Tools shows a lot of 404 errors for paths starting with /_, additional pages that I've never created. I just can't figure out what is creating these for Google's crawlers and then returning a 404. My robots.txt is here. My sitemap (created by the Yoast plugin) is here. I have the Yoast and Jetpack plugins installed. What could be causing these links to appear?

    Read the article

  • Mouse cursor is MASSIVE inside Firefox and Chromium

    - by user171396
    While installing Ubuntu I accidentally hit the high-contrast option. I could not figure out how to disable it within the installer, so I let it complete. I booted into Ubuntu 13.04 and high contrast was still on. I disabled it in Universal Access, and now I'm noticing my mouse cursor is huge in web browsers. This is very much a stock install. Is there a setting to disable the HUGE mouse? I mean, the thing is four times the size of the text on normal pages, and it's only in browsers from what I've seen so far. EDIT: Looks like it's in everything with text: terminal, app store, folders and files... /sigh.

    Read the article

  • Request for opinions about a vertical menu style and suggestions for the site style [on hold]

    - by AndreaNobili
    I am developing a mainly static website for a company using WordPress (because I may add some dynamic content in the future). The new site has to follow the structure of the old site, which requires a vertical main menu in the left column containing links to all the static pages on the site. This is the old site's structure: http://www.saranistri.com/ Now I have installed a new WordPress test site (this is only a test site): http://onofri.org/example/ As you can see, in the left column I have put two vertical main-menu widgets that implement possible choices for the main menu (the top menu above the header must be eliminated in the final implementation). I want to know: 1) Which of the two versions is better? Do you have any additional ideas about the CSS style of this vertical menu? 2) What could I do to give this site a more professional look? (I know that I have to insert a logo into the header.) Thanks, Andrea

    Read the article

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    mod_pagespeed is an open-source Apache module that automatically optimizes web pages and the resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google, and talk about the history of mod_pagespeed, its fast-growing adoption (130K+ sites!), its technical architecture, and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, this is definitely a show you won't want to miss! From: GoogleDevelopers | Time: 01:05:06

    Read the article

  • Official release of WebMatrix, Microsoft's new web development tool for beginners and small businesses

    Update of 14/01/11: As we predicted yesterday (see below), WebMatrix, Microsoft's new IDE, has been released. WebMatrix is an all-in-one tool aimed at all developers, but particularly at students and at anyone looking for a simple, fast way to build a website. It includes IIS Express (a development web server), ASP.NET Web Pages (a web development technology), and SQL Server Compact (an embedded database). "WebMatrix democratizes the web platform by..."

    Read the article

  • Wi-Fi works only after connecting through a wire

    - by orustam
    I have a fresh install of Ubuntu 12.04. It is my first Ubuntu installation and I'm a bit confused about the network connection. Wi-Fi shows up and connects (at least it shows that the connection is established), but I can't open any pages; I've tried to ping some sites and that fails too. If I connect through a wire, it works. What is interesting to me is that after I have used my wired connection, I can use my Wi-Fi properly without the wire plugged in. I think it probably has to do with my settings? My proxy is set to none (applied system-wide). I tried to find a solution but couldn't figure it out on my own. Please help me if you have any clue :)

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps containing all of these pages (1M+), because the pages contain only the file name, size, number of downloads, and download link(s). I'm considering this because I made another website like this in the past, and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea of how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap, or wait until Google indexes each page as it finds it? Thanks in advance :)

    Read the article

  • How do I require users to sign up before they can access my website?

    - by user1867842
    How do I stop my web pages from being reachable via the back button after users have logged out? And how can I block pages the way Facebook does? It doesn't let you into the site without an account, and if you put something in the URL and try to go to a page on the site directly, it gives you a page that says "you have to be logged in first". I don't want someone going to the URL of the index page before they have signed up as a member; they need to create an account first, and then they can have access to the index page. How do I do this? My website so far has a database and five pages, two of which are the login and sign-up pages; both are built with PHP and MySQL and work fine. How do I restrict access to the main website so users must first sign up for an account?
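    A minimal sketch of the usual PHP pattern, assuming the existing login script sets a session variable on success (the variable and file names here are illustrative): every protected page checks the session before rendering, and sends no-cache headers so the back button cannot resurrect a page after logout.

        <?php
        // auth_check.php: include this at the top of every protected page.
        session_start();
        // Stop the browser from serving a cached copy after logout.
        header('Cache-Control: no-store, no-cache, must-revalidate');
        header('Pragma: no-cache');
        if (empty($_SESSION['user_id'])) {   // set by the login script on success
            header('Location: login.php');   // bounce anonymous visitors to login
            exit;
        }

    index.php would then begin with require 'auth_check.php'; and the logout script would clear the session with session_destroy().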

    Read the article

  • How should I track multi-valued page attributes (e.g. tags) using custom variables?

    - by Simon
    Our pages can each have many tags, e.g. 'football', 'sms', 'nsfw', etc., which we would like to track in Google Analytics. We're already tracking things like category using Google Analytics custom variables, and we've used three of the five available slots so far. How can we track tags the same way? If we just mush them all together, e.g. 'football, sms, nsfw', can we still track the pages that are tagged 'football'? What's the right way to track multi-valued page attributes using custom variables?

    Read the article

  • Javascript Only Search Method [on hold]

    - by user2118228
    I need to put a search function on a website that is going to be on a CD-ROM with no access to the Internet. It has 80 pages and about 500 "items", so I'd prefer not to have to hard-code hundreds of if statements if possible. I've found a few programs you can buy that will index and generate results (Zoom Search, JSS Index, The German Guys'), but there are odd quirks with each one. Plus, I would rather code it myself to get complete control over it and to really understand what it's doing. Basically, searching for a few words would display the product image and description; clicking on that would take you to the related URL. This is kind of complicated, and I can't find an easy solution that doesn't involve hundreds of if statements. Has anyone ever created anything like this, or does anyone know a better method? I'm not really sure of a better way to go about this. I've used PHP/MySQL for search results before, but this site cannot run any PHP.
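    One hedged idea, sketched in PHP only because the index can be generated at build time on the author's machine (no PHP ships on the disc; all file names here are hypothetical): scan the 80 pages once, write a static JSON index next to them, and let plain client-side JavaScript load and search that file.

        <?php
        // build_index.php: run once before mastering the CD, not shipped on it.
        $index = [];
        foreach (glob('site/pages/*.html') as $file) {
            $text = strtolower(strip_tags(file_get_contents($file)));   // crude text extraction
            foreach (array_unique(str_word_count($text, 1)) as $word) {
                $index[$word][] = basename($file);                       // word => pages containing it
            }
        }
        file_put_contents('site/search-index.json', json_encode($index));

    A small script on the disc can then fetch search-index.json and intersect the page lists for each query word, with no server and no hard-coded if statements.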

    Read the article

  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
    We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs, and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs and the rest actually be too long, and if so, how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...

    Read the article

  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have compared with native apps, especially on the mobile front, is the constant need to re-download each web page on request, which ultimately leads to slower performance. What if web apps only downloaded new pages when they were actually needed, not simply because they were requested? For example, perhaps the server could store a web page's version number in a cookie, with every slight change to the page on the server side changing the version number. Instead of the browser requesting a new page each time, why not just check the version number and have the server send the page only if the numbers differ? If the page is unchanged, the user can just use the cached copy. I'm sure browsers wouldn't necessarily have to change to accommodate this, correct?
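    Worth noting that HTTP already has machinery very close to this proposal: entity tags. A minimal PHP sketch of a conditional response, with the version bookkeeping left as an assumption:

        <?php
        // page.php: reply 304 Not Modified when the client already has this version.
        $pageVersion = '42';                          // hypothetical: bumped on every edit
        $etag = '"' . md5($pageVersion) . '"';
        header('ETag: ' . $etag);
        header('Cache-Control: no-cache');            // always revalidate, never refetch blindly
        if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
            http_response_code(304);                  // client's cached copy is still current
            exit;                                     // no body is sent at all
        }
        echo '<html>... full page here ...</html>';

    The browser's follow-up request carries If-None-Match, so an unchanged page costs one tiny round trip instead of a re-download, which is essentially the version-number scheme described above.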

    Read the article

  • Apress Deal of the day - 5/Feb/2011

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/info/dailydeal is: Pro ASP.NET 4 in C# 2010, Fourth Edition. ASP.NET 4 is the latest version of Microsoft's revolutionary ASP.NET technology. It is the principal standard for creating dynamic web pages on the Windows platform. Pro ASP.NET 4 in C# 2010 raises the bar for high-quality, practical advice on learning and deploying Microsoft's dynamic web solution. $59.99 | Published Jun 2010 | Matthew MacDonald. I am reviewing this book at the moment, but I was already sufficiently impressed by it to have bought the PDF the day it became available last December.

    Read the article

  • Common light map practices

    - by M. Utku ALTINKAYA
    My scene consists of individual meshes. At the moment each mesh has its own associated light-map texture, and I was able to implement the light mapping using these many small textures. 1) Of course I want to create an atlas, but how do you split atlases into pages? I mean, do you group the light maps of objects that are close to each other, and load light maps on the fly if the scene is expected to be big? 2) The 3D authoring software provides automatic UV coordinates for each mesh in the scene, but there are empty areas in texel space, so if I scale the texture polygons, the texel density of each face will not match the other meshes; if I create an atlas like that, there will be varying light-map resolution. How do you solve this? Just leave it as it is, or ignore resolution? Actually, these questions also apply to other non-tiled maps.

    Read the article

  • Dual Screens not working nVidia

    - by user91396
    So I'm very much an Ubuntu noob; in fact, I just installed Ubuntu on my PC. I started it up with both my screens plugged into my nVidia card's DVI and VGA ports, logged in, and changed the skin to classic GNOME, because that's how it was when I last used Ubuntu (8.1), and both screens were working separately. The trouble is that I got a notification saying there were nVidia drivers to be installed, so I installed them and restarted my PC, as it told me to, and when I got back on, only one of my screens was working. When I go into Displays (All Settings, Displays) it doesn't register my other screen at all, and it calls my working screen "Laptop". I've tried looking through several pages of Google but I see no answer. I did try to find nvidia-settings to see if that had the answer, but sadly I couldn't locate it. Thanks in advance for any help, but please remember, I am very new to Ubuntu.

    Read the article

  • Did you have problems with the upgrade from 11.10 to 12.04 (LibreOffice)?

    - by Pascal Paulus
    This is the first time I'm reporting something, hoping that it can be useful to you. After updating from 11.10 to 12.04 (which includes updating LibreOffice, I suppose), I can no longer work with any document that was originally made in LibreOffice. Every change freezes the screen, and I can't save anything. I'm talking about complex documents of about 230 pages (PhD work), with lots of internal references and footnotes and some proper text styles. I wanted to alert you that something is probably wrong, but as I don't have any technical knowledge, I don't know what could be useful to help you in your great job of making good free software. My little desktop has 2 GB of RAM and an Atom processor (I can look for more details if that would be useful to you).

    Read the article

  • Change from static HTML file to meta tag for Google Webmaster verification

    - by Wilfred Springer
    I started verifying the server by putting a couple of static HTML files in place. Then I noticed that Google wants you to keep these files in place. I didn't want to keep the static HTML files, so I want to switch to an alternative verification mechanism and include the meta tags on the home page. Unfortunately, once your site is verified, you never seem to be able to change to an alternative way of verifying. I tried removing the HTML pages; no luck whatsoever. Google still considers the site to be "verified". Does anybody know how to undo this? All I want to do is switch to the meta-tag-based method of site-ownership verification.

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews on the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on the microformat data, etc.) that this is not duplicate content?

    Read the article
