Search Results

Search found 50074 results on 2003 pages for 'web servers'.

Page 500 of 2003

  • Data Web Controls

    - by Nani
    Hi, in a Repeater control, can we achieve both sorting and grouping together? If possible, please guide me on how to do it. If not, what is the best control for getting both sorting and grouping? Thank you.


  • SimpleDB direct client access

    - by AlexJReid
    One of the useful things about S3 for content storage is that a client can make a direct HTTP request to download an object. For instance, this is how Twitter serves up avatar images. SimpleDB also provides an HTTP interface to data. Rather than having to write a proxy that sits in between SimpleDB and the client, is it possible for client software (e.g. desktop, mobile) to make calls to read values from a SimpleDB domain without sharing credentials that shouldn't be shared? Or is a proxy in between the only way to go?
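
    If a proxy does turn out to be the way to go, it can be very thin. Below is a minimal sketch of that approach, assuming Node.js with the express and aws-sdk (v2) packages; the route shape, region, and port are illustrative assumptions, not anything from the question. Credentials stay on the server, and clients only ever see a read-only HTTP endpoint.

        // Minimal proxy sketch: clients read SimpleDB items over plain HTTP,
        // while the AWS credentials never leave the server.
        import express from "express";
        import { SimpleDB } from "aws-sdk";

        const sdb = new SimpleDB({ region: "us-east-1" }); // credentials from env or instance role
        const app = express();

        // e.g. GET /items/mydomain/item123 -> attributes of that item
        app.get("/items/:domain/:item", (req, res) => {
            sdb.getAttributes(
                { DomainName: req.params.domain, ItemName: req.params.item },
                (err, data) => {
                    if (err) return res.status(500).json({ error: err.message });
                    res.json(data.Attributes ?? []);
                }
            );
        });

        app.listen(3000);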


  • Amazon Product Advertising API: XXX is not a valid value for BrowseNodeId

    - by Chris
    Hello, I am using the Amazon Product Advertising API to fetch the product categories. For US categories it works, but using browse nodes for Amazon sites other than the US one, I get the following error: "569604 is not a valid value for BrowseNodeId. Please change this value and retry your request." I got the browse node IDs from the following page: http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?BrowseNodeIDs.html Where is the problem? Thanks for your help!


  • enabling tomcat web apps to be served from Ubuntu lucid

    - by user558925
    Hi, I have Tomcat running on port 8080 on an Ubuntu Lucid server. I am able to access it from the local machine, but I am unable to access Tomcat from any machine outside. Is this due to firewall restrictions? What do I need to do to enable access to Tomcat from remote machines? I tried adding this rule to iptables, but it did not solve the problem:

        iptables --table nat --append PREROUTING --protocol tcp --destination-port 80 \
            --in-interface eth0 --jump REDIRECT --to-port 8080

    Any help would be appreciated. Thanks, Bala Thiruppanambakkam


  • SKOS Vocabulary

    - by n0oB
    Hi all, I am searching for an example of a SKOS vocabulary. Does anybody know of a site or a link for a SKOS vocabulary? Thank you in advance!


  • Website not opening in Chrome?

    - by jitendra
    Users of one of my friend's sites are getting this error in Google Chrome: "Oops! This link appears to be broken." (See http://www.labnol.org/software/webpages-not-opening-in-google-chrome/13041/ for a description of the problem.) Can he do something with his hosting to ensure users of his site will not get this error?


  • Display image at point using jQuery

    - by Chris
    I have an image with a click event handler that captures the location where you clicked:

        $("#image").click(function(e) {
            var x = e.pageX - $(this).offset().left;
            var y = e.pageY - $(this).offset().top;
        });

    I want it so that when the image is clicked, an image appears at that location on top of the image. How do I do this?
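
    One way to do this is to wrap the image in a position: relative container and append an absolutely positioned <img> at the click coordinates. A minimal sketch in that direction follows, assuming jQuery is loaded, #image sits inside such a wrapper, and marker.png is a hypothetical overlay icon.

        // Minimal sketch: drop an absolutely positioned <img> at the click point.
        // Assumes #image lives inside a wrapper with "position: relative", and
        // that marker.png (hypothetical) is the overlay image to show.
        declare const $: any; // jQuery global

        $("#image").click((e: any) => {
            const $image = $("#image");
            const x = e.pageX - $image.offset().left;
            const y = e.pageY - $image.offset().top;

            $("<img>", { src: "marker.png" })
                .css({ position: "absolute", left: x, top: y })
                .appendTo($image.parent()); // the position: relative wrapper
        });

    Appending to the wrapper rather than to the image itself matters, because an <img> element cannot contain child elements.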


  • Get Highest Res Favicon

    - by Jeremy
    I'm making a website that needs to dynamically obtain the favicon of sites upon request. I've found a few APIs that can accomplish this fairly well, and so far I'm liking http://www.fvicon.com/. The final image for my website will be 64x64px, and some websites such as Google and WordPress have nice images of this size that are easily retrieved via this API. Of course, most websites only have a 16x16 favicon, and scaling that image to 64x64 results in very bad quality loss. Examples:

        (high res) http://a.fvicon.com/wordpress.com?format=png&width=64&height=64
        (low res)  http://a.fvicon.com/yahoo.com?format=png&width=64&height=64

    Keeping this in mind, I'm planning on somehow determining whether a high-res image is available; if so, the website will use that image, and if not, I want to use a pre-made 64x64 icon with the smaller icon layered over it. What I'm having trouble with is determining whether a high-res favicon is available. I'm also curious whether there's a better approach to this situation. I'd rather not use smaller images (64x64 works out really well for this project). The lowest resolution I'm willing to drop to is 48x48, but even then there will be a significant quality loss when scaling up 16x16 favicons. Any ideas? If you need any more information I will gladly provide it. Thank you!
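
    One heuristic worth considering, sketched below and not tied to any particular favicon API: many sites that ship a high-resolution icon expose it as an apple-touch-icon, which is typically 57px or larger, so probing for one is a cheap way to decide between using the real icon and falling back to the layered 16x16 version. The sketch assumes a server-side Node.js 18+ environment with a global fetch; the probe paths are common conventions only, since sites may declare their icons solely via <link> tags.

        // Rough sketch of the probe: does this domain appear to have a large icon?
        async function hasHighResIcon(domain: string): Promise<boolean> {
            // Conventional locations; real sites may use <link rel="apple-touch-icon"> instead.
            const candidates = [
                `https://${domain}/apple-touch-icon.png`,
                `https://${domain}/apple-touch-icon-precomposed.png`,
            ];
            for (const url of candidates) {
                try {
                    const res = await fetch(url, { method: "HEAD" });
                    const type = res.headers.get("content-type") ?? "";
                    if (res.ok && type.startsWith("image/")) {
                        return true;
                    }
                } catch {
                    // network error: treat as "not available" and keep probing
                }
            }
            return false;
        }

        // Usage: decide which artwork to serve for a given site.
        // const useBigIcon = await hasHighResIcon("wordpress.com");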


  • C/PHP: How do I convert the following PHP JSON API script into a C plugin for apache?

    - by TeddyB
    I have a JSON API that I need to provide super-fast access to my data through. The JSON API makes a simple query against the database based on the GET parameters provided. I've already optimized my database, so please don't recommend that as an answer. I'm using PHP-APC, which helps PHP by caching the bytecode, BUT - for a JSON API that is being called literally dozens of times per second (as indicated by my logs), I need to reduce the massive amount of RAM PHP is consuming, as well as rewrite my JSON API in a language that executes much faster than PHP. My code is below. As you can see, it is fairly straightforward.

        <?php
        define('ALLOWED_HTTP_REFERER', 'example.com');

        if ( stristr($_SERVER['HTTP_REFERER'], ALLOWED_HTTP_REFERER) ) {
            try {
                $conn_str = DB . ':host=' . DB_HOST . ';dbname=' . DB_NAME;
                $dbh = new PDO($conn_str, DB_USERNAME, DB_PASSWORD);

                $params = array();
                $sql = 'SELECT homes.home_id, address, city, state, zip FROM homes WHERE homes.display_status = true AND homes.geolat BETWEEN :geolatLowBound AND :geolatHighBound AND homes.geolng BETWEEN :geolngLowBound AND :geolngHighBound';

                $params[':geolatLowBound']  = $_GET['geolatLowBound'];
                $params[':geolatHighBound'] = $_GET['geolatHighBound'];
                $params[':geolngLowBound']  = $_GET['geolngLowBound'];
                $params[':geolngHighBound'] = $_GET['geolngHighBound'];

                if ( isset($_GET['min_price']) && isset($_GET['max_price']) ) {
                    $sql = $sql . ' AND homes.price BETWEEN :min_price AND :max_price ';
                    $params[':min_price'] = $_GET['min_price'];
                    $params[':max_price'] = $_GET['max_price'];
                }

                if ( isset($_GET['min_beds']) && isset($_GET['max_beds']) ) {
                    $sql = $sql . ' AND homes.num_of_beds BETWEEN :min_beds AND :max_beds ';
                    $params[':min_beds'] = $_GET['min_beds'];
                    $params[':max_beds'] = $_GET['max_beds'];
                }

                if ( isset($_GET['min_sqft']) && isset($_GET['max_sqft']) ) {
                    $sql = $sql . ' AND homes.sqft BETWEEN :min_sqft AND :max_sqft ';
                    $params[':min_sqft'] = $_GET['min_sqft'];
                    $params[':max_sqft'] = $_GET['max_sqft'];
                }

                $stmt = $dbh->prepare($sql);
                $stmt->execute($params);
                $result_set = $stmt->fetchAll(PDO::FETCH_ASSOC);

                /* output a JSON representation of the home listing data retrieved */
                ob_start("ob_gzhandler"); // compress the output
                header('Content-type: text/javascript');
                print "{'homes' : ";
                array_walk_recursive($result_set, "cleanOutputFromXSS");
                print json_encode( $result_set );
                print '}';

                $dbh = null;
            } catch (PDOException $e) {
                die('Unable to retrieve home listing information');
            }
        }

        function cleanOutputFromXSS(&$value) {
            $value = htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
        }
        ?>

    How would I begin converting this PHP code over to C, since C is both better on memory management (you do it yourself) and much, much faster to execute?


  • The Wheel Invention - Beneficial For Learning?

    - by Sarfraz
    Hello, Chris Coyier of css-tricks.com has written a good article titled "Regarding Wheel Invention". In one paragraph he says: "On the 'reinventing' side, you benefit from complete control and learning from the process." And on the very next line he says: "On the other side, you benefit from speed, reliability, and familiarity. Also often at odds are time spent and cost." I think he is right in both statements, and I really like the first one. I do sometimes reinvent the wheel to learn more and gain complete control over what I am building. I wonder why people are so biased against that. Isn't there the benefit of learning and getting complete control, and probably some other benefits too? I would love to see what you have to say about this.


  • Options to efficiently synchronize 1 million files with remote servers?

    - by Zilvinas
    At the company I work for, we have things called "playlists", which are small files of ~100-300 bytes each. There are about a million of them, and about 100,000 of them change every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly - ideally in under 2 minutes. It's very important that files deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet, but maybe people who have more experience with rsync could tell me whether it's a viable option? What other options are worth considering?
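
    For what it's worth, here is a rough sketch of the rsync idea as described in the question - whole-file copies (-W) with deletions propagated (--delete), pushed to all replicas in parallel. Driving it from Node.js is purely my assumption (a shell script or cron job would work just as well), the host names and paths are hypothetical, and whether this fits the 2-minute window would still need to be measured.

        // Rough sketch: push the playlist directory to every replica in parallel,
        // copying whole files (-W) and propagating deletions (--delete).
        import { execFile } from "child_process";
        import { promisify } from "util";

        const run = promisify(execFile);

        const replicas = ["replica1.example.com", "replica2.example.com"]; // ...all 10 hosts
        const src = "/data/playlists/"; // trailing slash: sync the directory contents

        async function pushAll(): Promise<void> {
            await Promise.all(
                replicas.map((host) => run("rsync", ["-aW", "--delete", src, `${host}:${src}`]))
            );
        }

        pushAll().catch(console.error);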


  • input URL, output contents of "view page source", i.e. after javascript / etc, library or command-line

    - by Ryan Berckmans
    I need a scalable, automated method of dumping the contents of "view page source" (the DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but they do not execute javascript or any of that 'fancy stuff'. My ideal solution looks like either of the following (fantasy solutions):

        cat urls.txt | google-chrome --quiet --no-gui \
            --output-sources-directory=~/urls-source

    (fantasy command line, no idea if flags like these exist) or

        cat urls.txt | python -c "import some-library; \
            ... use some-library to process urls.txt ; output sources to ~/urls-source"

    As a secondary concern, I also need to:

        - dump all included javascript source to file (a la firebug)
        - dump a pdf/image of the page to file (print to file)
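
    The fantasy commands above are close to what a headless-browser driver provides. Below is a minimal sketch of that approach using Puppeteer - my assumption, not something from the question; any headless-browser tool would do - assuming Node.js with the puppeteer package installed. It reads urls.txt, renders each page, and writes the post-JavaScript DOM to one file per URL; page.pdf() and page.screenshot() cover the secondary "print to file" concern in the same run.

        // Minimal sketch: dump the post-JavaScript DOM of each URL in urls.txt.
        import * as fs from "fs";
        import puppeteer from "puppeteer";

        async function dumpSources(urlFile: string, outDir: string): Promise<void> {
            const urls = fs.readFileSync(urlFile, "utf8").split("\n").filter(Boolean);
            fs.mkdirSync(outDir, { recursive: true });

            const browser = await puppeteer.launch();
            const page = await browser.newPage();

            for (const [i, url] of urls.entries()) {
                await page.goto(url, { waitUntil: "networkidle0" });            // let JS/XHR settle
                fs.writeFileSync(`${outDir}/${i}.html`, await page.content());  // serialized DOM
                // await page.pdf({ path: `${outDir}/${i}.pdf` });              // "print to file"
                // await page.screenshot({ path: `${outDir}/${i}.png` });
            }

            await browser.close();
        }

        dumpSources("urls.txt", "urls-source").catch(console.error);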


  • AJAX form sections - how to pass url of next stage of form

    - by dan727
    Hi, I've got a multi-part form (in a PHP MVC setup) which I have working correctly without JavaScript enhancement. I'm starting to add the AJAX form-handling code, which will handle each stage of a form submission (validating/saving data etc.) before using AJAX to load the next stage of the form. I'm wondering how best to pass the URL of the next stage to the current form being processed, so that my jQuery form-handling code can process the current form and then load the next part via AJAX. The form "action" is different from the URL of the next stage of the form - what do you think would be good practice here? I was thinking about either appending the URL of the next stage to the form action URL via a query string, and then using JavaScript to extract this URL when the form is successfully processed, or passing it via a hidden form element. I'm not sure what other client-side options I have here. Any thoughts?
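
    A minimal sketch of the hidden-form-element option follows, assuming jQuery is loaded and markup roughly like <div id="form-container"><form id="step-form" action="..."> ... <input type="hidden" name="next_url" value="/form/step-2"> </form></div> - all of these ids, names and URLs are hypothetical. Binding the submit handler to the container (delegation) means it keeps working after each new stage is loaded in.

        // Minimal sketch: read the next-stage URL from a hidden field, post the
        // current stage to its normal action, then swap in the next stage.
        declare const $: any; // jQuery global

        $("#form-container").on("submit", "#step-form", (e: any) => {
            e.preventDefault();
            const $form = $("#step-form");
            const nextUrl = $form.find("input[name='next_url']").val();

            $.post($form.attr("action"), $form.serialize())
                .done(() => {
                    // current stage saved: load the next stage into the container
                    $("#form-container").load(nextUrl);
                })
                .fail(() => {
                    // server-side validation failed: redisplay the current stage/errors here
                });
        });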


  • How to make a technical training session useful and successful for the trainee?

    - by metal-gear-solid
    Are these good suggestions to give for a successful training session?

    Practice time should always be given immediately after technical training. Usually, after receiving a technical session about something new, we go back to routine work. If we don't practice right after the training, then later, when we do work related to that training, we feel we need the training again. So if we get training today and will not use it for some period of time (15, 30, or 60 days), the training is of little use because it came at the wrong time - we will forget many things.

    Are there any other suggestions I should give? I'm the trainee, not the trainer. What suggestions should I give to the trainer/organizer?


  • iPhone - trouble with loading data from a web service into a table view

    - by medampudi
    I am using a window-based application and then loading my initial navigation-view-based controller. After it loads, if the user is not registered or has no credentials present, it takes the user to a login view controller:

        loginViewController *sampleView = [[loginViewController alloc] initWithNibName:@"loginViewController" bundle:nil];
        [self.navigationController presentModalViewController:sampleView animated:YES];
        [sampleView release];

    Right after that, I try to load the table with data that I get from a web service using ASIHTTP. For this question, let's say it takes 3 seconds to get the data and then deserialize it. My question is: it works okay in later runs, as I store the username and password in a secure location, but on the first run I am not able to get the data to load into the table view. I have tried a lot of things:

    1. Initially the data-fetch methods were somewhere different, so I thought that might be the problem and moved them to the same place as the table view controller (navigation controller).
    2. I even put a call to reload the data at the end of the parsing and deserialization code - nothing happens.
    3. I did not understand the concept of @property and all that.
    4. The screen is black, with nothing displayed on it, for a good 5 seconds on consecutive launches of the app, so could we have something like an MBProgressHUD implemented for this?

    Could anyone please help with these scenarios and give guidance on what path to take from here?


  • Centre content vertically in a <div>

    - by Ben
    Hi, this is probably very simple to do, but I can't quite get it right. I have a <div> which contains two <a> tags. I would like the <a> tags to be centred both vertically and horizontally. I have centred them horizontally by setting text-align: center; on the div, but I can't figure out how to centre them vertically. How would I go about doing this? Thanks.


  • How to host a naked domain on a CDN?

    - by rjw79
    If I have a domain that I wish to serve "naked", e.g. http://examp.le/, and efficiently via a CDN, what are my options? The issue is that the CDNs I looked at all want you to use a CNAME so that they can do geo-IP lookups. CNAMEs are not meant to be served at the same level as other records, and this apparently breaks some DNS resolvers; you need at least SOA and MX records at the same level for a naked domain. The only solutions seem to be: having A records in your own DNS, thus skipping the geo-IP lookup, or finding a CDN who will allow delegation of the whole domain so they can do geo-IP things for the A record directly. I've tried googling and can't find any CDN who offers this. Any ideas? I looked closely at Amazon CloudFront and Rackspace Cloud Files, but I couldn't work it out for either.

