Search Results

Search found 6311 results on 253 pages for 'limit clause'.

Page 94 of 253

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem 7 using Scala 2.8. The first solution I implemented takes ~8 seconds:

        def problem_7: Int = {
          var num = 17
          var primes = new ArrayBuffer[Int]()
          primes += 2
          primes += 3
          primes += 5
          primes += 7
          primes += 11
          primes += 13
          while (primes.size < 10001) {
            if (isPrime(num, primes)) primes += num
            if (isPrime(num + 2, primes)) primes += num + 2
            num += 6
          }
          return primes.last
        }

        def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
          // if n == 2 return false
          // if n == 3 return false
          var r = Math.sqrt(num)
          for (i <- primes) {
            if (i <= r) {
              if (num % i == 0) return false
            }
          }
          return true
        }

    Later I tried the same problem without storing prime numbers in an ArrayBuffer. This takes 0.118 seconds:

        def problem_7_alt: Int = {
          var limit = 10001
          var count = 6
          var num: Int = 17
          while (count < limit) {
            if (isPrime2(num)) count += 1
            if (isPrime2(num + 2)) count += 1
            num += 6
          }
          return num
        }

        def isPrime2(n: Int): Boolean = {
          // if n == 2 return false
          // if n == 3 return false
          var r = Math.sqrt(n)
          var f = 5
          while (f <= r) {
            if (n % f == 0) {
              return false
            } else if (n % (f + 2) == 0) {
              return false
            }
            f += 6
          }
          return true
        }

    I tried various mutable array/list implementations in Scala but was not able to make the first solution faster. I do not think that storing Ints in an array of size 10001 should make the program this slow. Is there some better way to use lists/arrays in Scala?
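    One detail worth noticing in the first version: the loop in isPrime keeps walking the whole primes buffer even after i passes sqrt(num); the guard skips the modulo but not the iteration, so every call scans all ~10,001 entries. A minimal sketch of an early-exit variant (an untested rewrite, not the poster's code):

        def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
          val r = Math.sqrt(num)
          // takeWhile stops at the first prime > sqrt(num) instead of scanning the rest
          primes.takeWhile(_ <= r).forall(num % _ != 0)
        }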


  • Why can't I get a TRUE return in this prepared statement?

    - by Cortopasta
    I can't seem to get this to do anything but return false. My best guess is that the prepared statement isn't executing, but I have no idea why.

        private function check_credentials($plain_username, $md5_password) {
            global $dbcon;
            $ac = new ac();
            $ac->dbconnect();
            $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
            $userid->bindParam(':username', $plain_username);
            $userid->bindParam(':password', $md5_password);
            $userid->execute();
            $id = $userid->fetch();
            return $id;
        }

    EDIT: I've even tried hard-coding the username and password into the function itself to try and isolate the problem, like this:

        private function check_credentials($plain_username, $md5_password) {
            global $dbcon;
            $plain_username = "jim";
            $md5_username = "waffles";
            $ac = new ac();
            $ac->dbconnect();
            $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
            $userid->bindParam(':username', $plain_username);
            $userid->bindParam(':password', $md5_password);
            $userid->execute();
            print_r($dbcon->errorInfo());
            $id = $userid->fetch();
            return $id;
        }

    Still nothing :-/
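    Incidentally, the hard-coded test assigns $md5_username while the query binds $md5_password, so the password parameter is still the original function argument. Beyond that, a hedged diagnostic sketch (an assumption about where the failure hides, not a confirmed fix): PDOStatement::fetch() returns false both on error and when no row matches, and errorInfo() must be read from the statement, not the connection, to see execute-time failures:

        $stmt = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
        $stmt->bindParam(':username', $plain_username);
        $stmt->bindParam(':password', $md5_password);
        if (!$stmt->execute()) {
            print_r($stmt->errorInfo());   // statement-level error, if any
        }
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row === false ? false : $row['id'];   // false here just means "no match"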


  • Controlling ASP.NET output cache memory usage

    - by Josh Einstein
    I would like to use output caching with WCF Data Services. Although there's nothing specifically built in to support caching, there is an OnStartProcessingRequest method that lets me hook in and set the cacheability of the request using the normal ASP.NET mechanisms.

    But I am worried about the worker process getting recycled due to excessive memory consumption if large responses are cached. Is there a way to specify an upper limit for the ASP.NET output cache, so that if this limit is exceeded, items in the cache will be discarded? I've seen the caching configuration settings, but I get the impression from the documentation that those apply to explicit caching via the Cache object, since there is a separate outputCacheSettings section which has no memory-related attributes.

    Here's a code snippet from Scott Hanselman's post that shows how I'm setting the cacheability of the request:

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            base.OnStartProcessingRequest(args);
            // Cache for a minute based on querystring
            HttpContext context = HttpContext.Current;
            HttpCachePolicy c = HttpContext.Current.Response.Cache;
            c.SetCacheability(HttpCacheability.ServerAndPrivate);
            c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }
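    For reference, a sketch of the <cache> element those configuration settings expose (the values here are illustrative, not recommendations). In ASP.NET 2.0 to 3.5 the output cache stores its entries in the same internal cache these limits govern, so they may bound output-cache memory as well; that is an assumption to verify, not a documented guarantee:

        <system.web>
          <caching>
            <cache privateBytesLimit="209715200"
                   percentagePhysicalMemoryUsedLimit="20" />
          </caching>
        </system.web>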


  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there,

    This is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all,
          :select => '*',
          :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                     JOIN users ON campaigns.user_id = users.id
                     LEFT JOIN countries ON countries.campaign_id = campaigns.id
                     LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
          :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]],
          :order => order,
          :limit => limit)

    My questions:

    Is there an alternative database like MySQL that has JOIN support, but is much faster? (I know there's Postgres; I'm still evaluating it.)

    Otherwise, would firing up a MySQL instance, loading a local database into memory, and re-loading it every 5 minutes help?

    Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL?

    Thank you!
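    Before swapping databases, a hedged sketch of composite indexes shaped to this query (column names are taken from the query above; whether they already have indexes is an assumption, so check EXPLAIN before and after):

        ALTER TABLE keywords  ADD INDEX idx_campaign_word    (campaign_id, word);
        ALTER TABLE countries ADD INDEX idx_campaign_country (campaign_id, country);
        ALTER TABLE ads       ADD INDEX idx_campaign_format  (campaign_id, format, enabled);

    With indexes like these, each JOIN ... ON x.campaign_id = campaigns.id plus its WHERE filter can be satisfied from a single index lookup instead of a scan.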


  • PHP readfile() and large downloads

    - by Nirmal
    While setting up an online file management system, I have hit a block. I am trying to push the file to the client using this modified version of readfile:

        function readfile_chunked($filename, $retbytes = true) {
            $chunksize = 1 * (1024 * 1024); // how many bytes per chunk
            $buffer = '';
            $cnt = 0;
            $handle = fopen($filename, 'rb');
            if ($handle === false) {
                return false;
            }
            while (!feof($handle)) {
                $buffer = fread($handle, $chunksize);
                echo $buffer;
                ob_flush();
                flush();
                if ($retbytes) {
                    $cnt += strlen($buffer);
                }
            }
            $status = fclose($handle);
            if ($retbytes && $status) {
                return $cnt; // return num. bytes delivered, like readfile() does
            }
            return $status;
        }

    But when I try to download a 13 MB file, it just breaks at 4 MB. What would be the issue here? It's definitely not a time limit of any kind, because I am working on a local network and speed is not an issue. The memory limit in PHP is set to 300 MB. Thank you for any help.
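    A hedged guess at the 4 MB ceiling (an assumption, since nothing in the post confirms it): output compression or an extra output buffer sitting between the script and the web server can cap what actually reaches the client. A sketch that rules both out before streaming; note that if every buffer is drained here, the ob_flush() call inside the loop should be dropped, since it warns when no buffer exists:

        // before sending any file data
        ini_set('zlib.output_compression', 'Off');
        if (function_exists('apache_setenv')) {
            apache_setenv('no-gzip', '1');   // disable mod_deflate for this request
        }
        while (ob_get_level() > 0) {
            ob_end_flush();                  // drain every nested output buffer
        }
        header('Content-Length: ' . filesize($filename));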


  • Destroying nested resources in restful way

    - by Alex
    I'm looking for help destroying a nested resource in Merb. My current method seems nearly correct, but the controller raises an InternalServerError during the destruction of the nested object. Here are all the details concerning the request; don't hesitate to ask for more :) Thanks, Alex

    I'm trying to destroy a nested resource using the following route in the router:

        resources :events, Orga::Events do |event|
          event.resources :locations, Orga::Locations
        end

    Which gives this jQuery request (the delete_ method is an implementation of $.ajax with type "DELETE"):

        $.delete_("/events/123/locations/456");

    On the Location controller side, I've got:

        def delete(id)
          @location = Location.get(id)
          raise NotFound unless @location
          if @location.destroy
            redirect url(:orga_locations)
          else
            raise InternalServerError
          end
        end

    And the log:

        merb : worker (port 4000) ~ Routed to: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
        merb : worker (port 4000) ~ Params: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
        ~ (0.000025) SELECT `id`, `class_type`, `name`, `prefix`, `type`, `capacity`, `handicap`, `export_name` FROM `entities` WHERE (`class_type` IN ('Location') AND `id` = 456) ORDER BY `id` LIMIT 1
        ~ (0.000014) SELECT `id`, `streetname`, `phone`, `lat`, `lng`, `country_region_city_id`, `location_id`, `organisation_id` FROM `country_region_city_addresses` WHERE `location_id` = 456 ORDER BY `id` LIMIT 1
        merb : worker (port 4000) ~ Merb::ControllerExceptions::InternalServerError - (Merb::ControllerExceptions::InternalServerError)
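    Reading the log, the action does run: the Location row is found, and a dependent country_region_city_addresses row is loaded immediately before the failure, which suggests destroy returned false because a child record still references the location. A speculative sketch of that theory (the association name here is a guess derived from the table name, not from the post):

        def delete(id)
          @location = Location.get(id)
          raise NotFound unless @location
          # hypothetical: remove the dependent address first so destroy can succeed
          @location.country_region_city_address.destroy if @location.country_region_city_address
          if @location.destroy
            redirect url(:orga_locations)
          else
            raise InternalServerError
          end
        end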


  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this, and so far the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules, or even many best practices, when it comes to these settings, so if anyone with experience could help me I would greatly appreciate it.

    The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost).

    I am using Resilient Propagation to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver, so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations.

    Currently, when I try to train it with data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000-iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit, I would greatly appreciate it. Thank you!
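    One detail that often dominates training time with inputs of this shape (raw values from 1 up to ~100,000) is normalization. A sketch of plain min-max scaling into [0,1] before training; this is an assumption about the setup, since the post doesn't say whether the inputs are already scaled, and Java is used here only as an illustration:

        // scale one input column into [0,1]; min/max come from the training data
        static double[] normalize(double[] raw, double min, double max) {
            double[] out = new double[raw.length];
            for (int i = 0; i < raw.length; i++) {
                out[i] = (raw[i] - min) / (max - min);
            }
            return out;
        }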


  • how to atomically claim a row or resource using UPDATE in mysql

    - by Igor
    I have a table of resources (let's say cars) which I want to claim atomically. If there's a limit of one resource per user, I can do the following trick:

        UPDATE cars SET user = 'bob' WHERE user IS NULL LIMIT 1;
        SELECT * FROM cars WHERE user = 'bob';

    That way, I claim the resource atomically and then I can see which row I just claimed. This doesn't work when 'bob' can claim multiple cars. I realize I can get a list of cars already claimed by bob, claim another one, and then SELECT again to see what's changed, but that feels hackish.

    What I'm wondering is: is there some way to see which rows I just updated with my last UPDATE? Failing that, is there some other trick to atomically claiming a row? I really want to avoid using the SERIALIZABLE isolation level. If I do something like this:

        1. SELECT id FROM cars WHERE user IS NULL
        2. <here, my PHP or whatever picks a car id>
        3. UPDATE cars SET user = 'bob' WHERE id = <the one i picked>

    would REPEATABLE READ be sufficient here? In other words, could I be guaranteed that some other transaction won't claim the row my software has picked during step 2?
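    A sketch of two common InnoDB idioms for this (assuming InnoDB; note that a plain SELECT under REPEATABLE READ is a non-locking snapshot read, so it would not protect step 2 by itself):

        -- 1) lock the candidate row while picking it
        START TRANSACTION;
        SELECT id FROM cars WHERE user IS NULL LIMIT 1 FOR UPDATE;
        -- application notes the returned id, say 42
        UPDATE cars SET user = 'bob' WHERE id = 42;
        COMMIT;

        -- 2) or smuggle the claimed id out of the UPDATE itself
        UPDATE cars SET user = 'bob', id = LAST_INSERT_ID(id)
        WHERE user IS NULL LIMIT 1;
        SELECT LAST_INSERT_ID();  -- the id just claimed, if a row was updated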


  • Printing Arrays from Structs

    - by Carlll
    I've been stumped for a few hours on an exercise where I must use functions to build up an array inside a struct and print it. My current program compiles but crashes upon running.

        #define LIM 10

        typedef char letters[LIM];

        typedef struct {
            int counter;
            letters words[LIM];
        } foo;

        int main(int argc, char **argv) {
            foo apara;
            structtest(apara, LIM);
            print_struct(apara);
        }

        int structtest(foo *p, int limit) {
            p->counter = 0;
            int i = 0;
            for (i; i < limit; i++) {
                strcpy(p->words[p->counter], "x"); // only filling arrays with 'x' as an example
                p->counter++;
            }
            return;
        }

    I do believe it's due to my incorrect usage/combination of pointers. I've tried adjusting them, but either an 'incompatible types' error is produced, or the array is seemingly blank.

        void print_struct(foo p) {
            printf(p.words);
        }

    I haven't made it successfully up to the print_struct stage, but I'm unsure whether p.words is the correct item to be calling. In the output, I would expect the function to return an array of x's. I apologize in advance if I've made some sort of grievous "I should already know this" C mistake. Thanks for your help.
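    A hedged sketch of the likely fixes (my assumptions: the crash comes from passing the struct by value to a function that expects a pointer, and from handing a non-string to printf):

        #include <stdio.h>
        #include <string.h>

        #define LIM 10

        typedef char letters[LIM];

        typedef struct {
            int counter;
            letters words[LIM];
        } foo;

        void structtest(foo *p, int limit) {
            p->counter = 0;
            for (int i = 0; i < limit; i++) {
                strcpy(p->words[p->counter], "x");
                p->counter++;
            }
        }

        void print_struct(const foo *p) {
            for (int i = 0; i < p->counter; i++)
                printf("%s\n", p->words[i]);   /* print each stored word */
        }

        int main(void) {
            foo apara;
            structtest(&apara, LIM);           /* pass the address, not a copy */
            print_struct(&apara);
            return 0;
        }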


  • Shouldn't prepared statements be much faster?

    - by silversky
        $s = explode(" ", microtime());
        $s = $s[0] + $s[1];
        $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');
        for ($i = 0; $i < 1000; $i++) {
            $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
            $stmt->execute();
            $stmt->bind_result($M, $m);
            $stmt->free_result();
            $rand = mt_rand($m, $M) . '<br/>';
            $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
            $res->bind_param("s", $rand);
            $res->execute();
            $res->free_result();
        }
        $e = explode(" ", microtime());
        $e = $e[0] + $e[1];
        echo number_format($e - $s, 4, '.', '');

    and:

        $link = mysql_connect("localhost", "test", "pass") or die();
        mysql_select_db("db") or die("Unable to select database" . mysql_error());
        for ($i = 0; $i < 1000; $i++) {
            $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
            $range_row = mysql_fetch_object($range_result);
            $random = mt_rand($range_row->min_id, $range_row->max_id);
            $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
        }

    Definitely prepared statements are much safer, and everywhere it is said that they are also much faster. BUT in my test of the above code I have:

        2.45 sec for prepared statements
        5.05 sec for the second example

    What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
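    Three things in the prepared-statement loop are worth flagging: prepare() runs inside the loop (re-preparing 1000 times throws away the main benefit), bind_result() is never followed by fetch() (so $M and $m are never populated), and '<br/>' is concatenated onto $rand before it is bound. A sketch of the benchmark with those fixed (my rewrite, not the original test):

        $min_max = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $pick    = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
        for ($i = 0; $i < 1000; $i++) {
            $min_max->execute();
            $min_max->bind_result($M, $m);
            $min_max->fetch();               // actually populate $M and $m
            $min_max->free_result();
            $rand = mt_rand($m, $M);
            $pick->bind_param("i", $rand);   // bind as integer, no markup appended
            $pick->execute();
            $pick->free_result();
        }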


  • compact Number formatting behavior in Java (automatically switch between decimal and scientific notation)

    - by kostmo
    I am looking for a way to format a floating point number dynamically in either standard decimal format or scientific notation, depending on the value of the number.

    For moderate magnitudes, the number should be formatted as a decimal with trailing zeros suppressed. If the floating point number is equal to an integral value, the decimal point should also be suppressed. For extreme magnitudes (very small or very large), the number should be expressed in scientific notation. Alternately stated: if the number of characters in the standard decimal notation exceeds a certain threshold, switch to scientific notation. I should have control over the maximum number of digits of precision, but I don't want trailing zeros appended to express the minimum precision; all trailing zeros should be suppressed. Basically, it should optimize for compactness and readability.

        2.80000            -> 2.8
        765.000000         -> 765
        0.0073943162953    -> 0.00739432    (limit digits of precision, to 6 in this case)
        0.0000073943162953 -> 7.39432E-6    (switch to scientific notation if the magnitude is small enough, less than 1E-5 in this case)
        7394316295300000   -> 7.39432E+15   (switch to scientific notation if the magnitude is large enough, for example greater than 1E+10)
        0.0000073900000000 -> 7.39E-6       (strip trailing zeros from the significand in scientific notation)
        0.000007299998344  -> 7.3E-6        (rounding from the 6-digit precision limit leaves trailing zeros, which are stripped)

    Here's what I've found so far. The .toString() method of the Number class does most of what I want, except it doesn't upconvert to integer representation when possible, and it will not express large integral magnitudes in scientific notation. Also, I'm not sure how to adjust the precision.

    The "%G" format string to the String.format(...) function allows me to express numbers in scientific notation with adjustable precision, but does not strip trailing zeros.

    I'm wondering if there's already some library function out there that meets these criteria. I guess the only stumbling block to writing this myself is having to strip the trailing zeros from the significand in the scientific notation produced by %G.
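    For reference, a sketch of a hand-rolled version (my code, using the thresholds from the examples above: 6 significant digits, scientific below 1E-5 and at or above 1E+10):

        import java.math.BigDecimal;
        import java.math.MathContext;

        static String compact(double value) {
            double abs = Math.abs(value);
            if (value != 0 && (abs < 1e-5 || abs >= 1e10)) {
                String s = String.format("%.5E", value);   // e.g. 7.39432E-06
                // strip trailing zeros (and a dangling '.') from the significand
                return s.replaceFirst("0+E", "E").replaceFirst("\\.E", "E");
            }
            BigDecimal bd = new BigDecimal(value, new MathContext(6)).stripTrailingZeros();
            return bd.toPlainString();                     // 2.8, 765, 0.00739432
        }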


  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently I have been playing with developing a Rails app as a project to help me learn the framework.

    The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit imposed by the external service. I wish to respect the performance of the external site, and so would like to throttle the rate at which my app executes the calls.

    I have done a little searching, and the overwhelming majority of the tutorial material and pre-built libraries I have found cover throttling inbound API calls to a web app; I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a Rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are:

    Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests?

    Are there any Ruby gems or other libraries that would help me throttle the requests?

    I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.
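    In case it helps frame the search, a sketch of the simplest blocking approach (a hypothetical class, not an existing gem): space the calls so that at most per_minute of them go out in any 60-second window:

        class Throttle
          def initialize(per_minute)
            @interval = 60.0 / per_minute
            @next_at  = Time.now
          end

          def wait
            now = Time.now
            sleep(@next_at - now) if @next_at > now
            @next_at = [@next_at, now].max + @interval
          end
        end

        throttle = Throttle.new(30)   # e.g. 30 calls per minute
        urls.each do |url|
          throttle.wait
          fetch_from_api(url)         # hypothetical API-call helper
        end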


  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello,

    I'm completely stumped on this one. For some reason, when I sort this query by DESC it's super fast, but when sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to USE INDEX (published). If I take that out, performance is the same both ways, but the EXPLAIN shows the query as less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
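    A hedged sketch of one standard fix (an assumption that fits the symptoms: scanning the published index backwards finds 50 matching rows almost immediately, while scanning it forwards wades through rows from other feeds first). A composite index lets MySQL filter and sort off the same index in either direction; verify with EXPLAIN:

        ALTER TABLE posts ADD INDEX feed_published (feed_id, published);

        SELECT posts.id FROM posts
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;   -- and drop the USE INDEX hint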


  • Building an extension framework for a Rails app

    - by obvio171
    I'm starting research on what I'd need in order to build a user-level plugin system (like Wordpress plugins) for a Rails app, so I'd appreciate some general pointers/advice. By user-level plugin I mean a package a user can extract into a folder and have it show up on an admin interface, allowing them to add some extra configuration and then activate it.

    What is the best way to go about doing this? Is there any other open-source project that does this already? What does Rails itself already offer for programmer-level plugins that could be leveraged? Any Rails plugins that could help me with this?

    A plugin would have to be able to:

    - run its own migrations (with this? it's undocumented)
    - have access to my models (plugins already do)
    - have entry points for adding content to views (can be done with content_for and yield)
    - replace entire views or partials (how?)
    - provide its own admin and user-facing views (how?)
    - create its own routes (or maybe just announce its presence and let me create the routes for it, to avoid plugins stepping on each other's toes)

    Anything else I'm missing? Also, is there a way to limit which tables/actions the plugin has access to concerning migrations and models, and also limit their access to routes (maybe letting them include, but not remove, routes)?

    P.S.: I'll try to keep this updated, compiling stuff I figure out and relevant answers, so as to have a sort of guide for others.


  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    I'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns:

        Number:      0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h)
        SQLState:    HYT00
        NativeError: 0

    The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server) but of ADO's internal timeout mechanism. The Number (i.e. the COM HRESULT) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says:

        Execution stopped because a resource limit was reached. No results were returned.

    This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits CPU usage, I/O reads/writes, or network bandwidth and then stops a query.

    The final useful piece is SQLState, which is a database-independent error code system. Unfortunately, the only reference for SQLState error codes I can find has no mention of HYT00. What to do?

    Note: I can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired" any more than I could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim".

    My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT00" means? And my real question still remains: how can I specifically trap a "timeout expired" exception thrown by ADO?

    Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how googling for DB_E_ABORTLIMITREACHED puts this question at #9, with MSDN nowhere to be found.

    Update 3: From the OLE DB ICommand::Execute reference:

        DB_E_ABORTLIMITREACHED
        Execution has been aborted because a resource limit has been reached.
        For example, a query timed out. No results have been returned.

    "For example", meaning not an exhaustive list.


  • mySQL ORDER BY ASC works, DESC does not...

    - by Ben
    I've "integrated" SMF into Wordpress by querying the forum for a list of the most recent videos from a specific forum board and displaying them on the Wordpress homepage. However, when I add the ORDER BY clause, the query (which I tested successfully on other parts of the same page) breaks. To add to the mix, I am using the AutoEmbed plugin to allow the videos to be played on the homepage, as well as a jCarousel feature to rotate them.

    People were kind enough to help me here last time with the regexp to filter the video URLs; I'm hoping for the same luck this time! Here is the entire function (remove the DESC and it works...):

        function SMF_getRecentVids($limit = 10) {
            global $smf_settingsphp_d;
            if (file_exists($smf_settingsphp_d))
                include($smf_settingsphp_d);
            include "AutoEmbed-1.4/AutoEmbed.class.php";
            $AE = new AutoEmbed();
            $connect = new wpdb($db_user, $db_passwd, $db_name, $db_server);
            $connect->query("SET NAMES 'UTF8'");
            $sql = "SELECT m.subject, m.ID_MSG, m.body, m.ID_TOPIC, m.ID_BOARD, t.ID_FIRST_MSG
                    FROM {$db_prefix}messages AS m
                    LEFT JOIN {$db_prefix}topics AS t ON (m.ID_TOPIC = t.ID_TOPIC)
                    WHERE (m.ID_BOARD = 8)
                    ORDER BY t.ID_FIRST_MSG DESC";
            $vids = $connect->get_results($sql);
            $c = 0;
            $content = "imageCarousel_itemList = [";
            foreach ($vids as $vid) {
                if ($c > $limit) continue;
                // extract video code from body
                $input = $vid->body;
                $regexp = "/\b(?:(?:https?|ftp):\/\/|www\.)[-a-z0-9+&@#\/%?=~_|!:,.;]*[-a-z0-9+&@#\/%=~_|]/i";
                if (preg_match_all($regexp, $input, $matches)) {
                    $AE->parseUrl($matches[0][0]);
                    $imageURL = $AE->getImageURL();
                    $AE->setWidth(290);
                    $AE->setHeight(240);
                    $content .= "{url: '" . $AE->getEmbedCode() . "', title: '" . $vid->subject . "', caption: '', description: ''},";
                }
                $c++;
            }
            $content .= "]";
            echo $content;
            $wpdb = new wpdb(DB_USER, DB_PASSWORD, DB_NAME, DB_HOST);
        }


  • belongs_to with multiple models

    - by julie p
    Hi there! I am a Rails noob and have a question. I have a feed aggregator that is organized by this general concept:

    - Feed Category (books, electronics, etc.)
    - Feed Site Section (home page, books page, etc.)
    - Feed (the feed itself)
    - Feed Entry

    So:

        class Category < ActiveRecord::Base
          has_many :feeds
          has_many :feed_entries, :through => :feeds, :limit => 5
          validates_presence_of :name
          attr_accessible :name, :id
        end

        class Section < ActiveRecord::Base
          has_many :feeds
          has_many :feed_entries, :through => :feeds, :limit => 5
          attr_accessible :name, :id
        end

        class Feed < ActiveRecord::Base
          belongs_to :categories
          belongs_to :sections
          has_many :feed_entries
          validates_presence_of :name, :feed_url
          attr_accessible :name, :feed_url, :category_id, :section_id
        end

        class FeedEntry < ActiveRecord::Base
          belongs_to :feed
          belongs_to :category
          belongs_to :section
          validates_presence_of :title, :url
        end

    Make sense? Now, on my index page, I basically want to say: if you are in the Books category, on the Home Page section, give me the feed entries grouped by Feed.

    In my controller:

        def index
          @section = Section.find_by_name("Home Page")
          @books = Category.find_by_name("Books")
        end

    In my view:

        <%= render :partial => 'feed_list', :locals => {:feed_group => @books.feeds} -%>

    This partial will spit out the markup for each feed entry in the @books collection of feeds. Now what I need to do is somehow combine the @books with the @section... I tried this:

        <%= render :partial => 'feed_list', :locals => {:feed_group => @books.feeds(:section_id => @section.id)} -%>

    But it isn't limiting by the section ID. I've confirmed the section ID by using the same code in the console... Make sense? Any advice? Thanks!
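    Two things stand out in the code above (observations plus a sketch of the fixes; my assumptions, not a tested patch): belongs_to takes a singular name, and a bare hash passed to an association reader like @books.feeds(...) is not treated as conditions, so the filter needs to go into a finder:

        class Feed < ActiveRecord::Base
          belongs_to :category
          belongs_to :section
          has_many :feed_entries
        end

        # in the controller (or the view), filter the feeds by section:
        @books.feeds.find(:all, :conditions => { :section_id => @section.id })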


  • [iphone,twitter] Accessing the Twitter API through a proxy using NSURLConnection, OAuth problem

    - by akaii
    I'm having no problems sending an update directly via https://api.twitter.com/, but the app (for the iPhone; I'm using NSURLConnection) I'm working on is supposed to allow the user to select a preferred proxy (e.g. https://twitter-proxy.appspot.com/api/ or https://nest.onedd.net/api/), and I keep getting a 401 error ("Failed to validate oauth signature and token") whenever I try to get an access token via these proxies.

    Even though I send my POST request to the proxy, I am still using the direct URL for the API (https://api.twitter.com/[rest api path]) in the base string. Despite the 401 error message above, the status code I'm actually getting from connection:didReceiveResponse: is 200, probably because it was able to successfully contact the proxy...

    Is there anything else that I need to consider when using a proxy to access the API? Should anything in the authorization header change, for example? Or the base string? I can manage to connect via Basic Auth without issue, but support for that will be dropped in a month.

    On a somewhat unrelated note: what are the possible causes of Twitter's error 403, and how do you distinguish between them? Is the only way to differentiate an error due to exceeding the hourly status-update limit (150 per hour) vs. the daily one (1000 per day) by checking the string reply returned in the response? Is there any way for me to simulate a status-update limit error without going through the motions of actually sending 150/1000 tweets?


  • How to get TinyMCE and Jquery validate to work together?

    - by chobo2
    Hi, I am using jQuery Validate with the jQuery version of TinyMCE. I found this piece of code that makes TinyMCE validate itself every time something changes in it, so I have this in my configuration:

        // update validation status on change
        onchange_callback: function (editor) {
            tinyMCE.triggerSave();
            $("#" + editor.id).valid();
        },

    This works, but there is one problem. If a user copies something from Word, it brings all that junk styling with it, which is usually over 50,000 characters. This is way over the number of characters a user is allowed to type in, so my jQuery validation goes off, telling me they went over the limit. In the meantime, though, TinyMCE has cleaned up that mess, and it's possible the user has not actually gone over the limit. Yet the message is still there.

    So is there a better function I can put this in? Maybe tell TinyMCE to delay the validation while a paste is happening, or maybe use a different callback? Anyone got any ideas?
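    One possible direction, sketched with assumptions: if the TinyMCE paste plugin is loaded, its paste_postprocess setting fires after TinyMCE has cleaned the pasted content, so revalidating there would see the post-cleanup length (using tinyMCE.activeEditor assumes the paste happens in the focused editor):

        paste_postprocess: function (pl, o) {
            var ed = tinyMCE.activeEditor;
            window.setTimeout(function () {
                tinyMCE.triggerSave();          // save the cleaned content
                $("#" + ed.id).valid();         // then re-run jQuery Validate
            }, 0);
        },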


  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and I want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try and avoid using C++ and JNI if possible. I realize this might not be possible using a higher-level language, but I'm curious to see if anyone has any ideas.

    EDIT: Added a bounty; I'd like some really good, well-thought-out ideas on this.

    EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since, as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog all the resources.

    EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that there is some special setup needed to use this tool.
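    For reference, a sketch of the "special setup" that EDIT 3 alludes to: getObjectSize is only reachable through an instrumentation agent (the class and jar names here are illustrative):

        import java.lang.instrument.Instrumentation;

        public class ObjectSizer {
            private static volatile Instrumentation inst;

            // called by the JVM before main() when the agent is installed
            public static void premain(String args, Instrumentation instrumentation) {
                inst = instrumentation;
            }

            public static long sizeOf(Object o) {
                return inst.getObjectSize(o);   // shallow size, in bytes
            }
        }

        // MANIFEST.MF:  Premain-Class: ObjectSizer
        // run with:     java -javaagent:sizer.jar MainClass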


  • System.Math.Round bug?

    - by Jeevan
    Hi all,

    I was writing a function for rounding a number to two places, and I found a bug when rounding specific values. So I ran this code:

        class Program
        {
            static void Main(string[] args)
            {
                int limit = 100;
                for (int number = 0; number <= limit; number++)
                {
                    Console.WriteLine(System.Math.Round((double)(number + 0.995), 2, MidpointRounding.AwayFromZero));
                }
            }
        }

    And I found that these numbers are not rounded up to their next value:

        8.99 9.99 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 32.99 33.99 34.99 35.99 36.99 37.99 38.99 39.99

    When I run the same code up to 1500, I get these numbers:

        8.99 9.99 32.99 33.99 34.99 35.99 36.99 37.99 38.99 39.99
        1024.99 1025.99 1026.99 1027.99 ... 1308.99 1309.99
        (the second run is unbroken: every value from 1024.99 through 1309.99 fails to round up)

    Has anyone any idea why this happens for these specific numbers?
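    A hedged explanation sketch (an assumption consistent with the output above, not a confirmed answer): the failing inputs are exactly the ones where n + 0.995 has no exact double representation and the nearest double lands just below the midpoint, so AwayFromZero never sees a midpoint at all. This can be made visible with the round-trip format, and avoided with decimal, which represents 0.995 exactly:

        double d = 8 + 0.995;
        Console.WriteLine(d.ToString("R"));
        // prints something like 8.9949999999999992, i.e. below the midpoint

        Console.WriteLine(Math.Round(8 + 0.995m, 2, MidpointRounding.AwayFromZero));
        // 9.00: decimal arithmetic keeps the true midpoint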


  • Find three numbers appeared only once

    - by shilk
    In a sequence of length n, where n = 2k + 3, there are k unique numbers that appear twice and three numbers that appear only once. The question is: how do you find the three numbers that appear only once?

    For example, in the sequence

        1 1 2 6 3 6 5 7 7

    the three unique numbers are 2, 3, 5.

    Note: 3 <= n < 1e6, and each number ranges from 1 to 2e9. The memory limit is 1000 KB, which implies that we can't store the whole sequence.

    A method I have tried (memory limit exceeded): I initialize a tree, and when reading in a number I try to remove it from the tree; if the remove returns false (not found), I add it to the tree. Finally, the tree holds the three numbers. It works, but exceeds the memory limit.

    I know how to find one or two such numbers using bit manipulation, so I wonder if we can find three using the same method (or something similar). The method for one or two numbers appearing only once: if one number appears only once, we can XOR the whole sequence to find it. If there are two, we can first XOR the sequence, then separate the sequence into two parts by one bit of the result that is 1, apply XOR to each of the two parts, and we will find both answers.
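    For concreteness, a sketch of the two-unique method described above (C here as an illustration, since the post names no language; it assumes the input can be scanned twice, e.g. from a file, so nothing is stored):

        #include <stdio.h>

        int main(void) {
            FILE *f = fopen("input.txt", "r");
            long n;
            fscanf(f, "%ld", &n);
            unsigned v, all = 0;
            for (long i = 0; i < n; i++) { fscanf(f, "%u", &v); all ^= v; }  /* all = a ^ b */
            unsigned bit = all & -all;   /* lowest set bit: a and b differ here */
            rewind(f);
            fscanf(f, "%ld", &n);
            unsigned a = 0;
            for (long i = 0; i < n; i++) {
                fscanf(f, "%u", &v);
                if (v & bit) a ^= v;     /* pairs cancel; only a (or b) survives */
            }
            printf("%u %u\n", a, all ^ a);
            fclose(f);
            return 0;
        }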


  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when it reaches the array_diff call. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    (source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work)

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000.

    I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, alter it, or just find out what it is?
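    A hedged sketch for pinning down the suspected server limit (an assumption: the usual culprit for a silent death that scales with array size is PHP's memory_limit, which can easily differ between machines), plus a diff that does hash lookups instead of in_array()'s linear scans:

        // log fatals instead of dying silently, and see what this server allows
        ini_set('display_errors', '1');
        error_reporting(E_ALL);
        echo ini_get('memory_limit'), "\n";
        echo memory_get_peak_usage(true), "\n";   // bytes actually consumed so far

        // O(n+m) diff: one flip, then constant-time key lookups
        function fast_array_diff(array $a, array $b) {
            $lookup = array_flip($b);
            $out = array();
            foreach ($a as $k => $v) {
                if (!isset($lookup[$v])) {
                    $out[$k] = $v;
                }
            }
            return $out;
        }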


  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator==donator.id
                    ).as_scalar()
                )),
                link_table.c.id!=donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active==True,
                link_table.c.id!=donator.id,
                request_table.c.donator==donator.id,
                link_table.c.id!=request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There are several links (donators) in request_table pointing to one link in link_table. I want the links from link_table which are not yet "requested". But this does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!
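    A sketch of why the merged version can't work, and one way it can (my rewrite, untested): joining the two tables and filtering with link_table.c.id != request_table.c.recipient only excludes individual row pairs, not links that appear as a recipient in any row. "No matching request exists" is an anti-join, i.e. a LEFT OUTER JOIN that keeps only the rows where nothing joined:

        from sqlalchemy import select, and_

        j = link_table.outerjoin(
            request_table,
            and_(request_table.c.recipient == link_table.c.id,
                 request_table.c.donator == donator.id))

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(request_table.c.id == None,   # nothing joined => never requested
                             link_table.c.id != donator.id),
            from_obj=[j],
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()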


  • Apache RewriteRule: it is possible to 'detect' the first and second path segment?

    - by DaNieL
    I'm really a newbie with regexps and I can't figure this out. My goal is to have a RewriteRule that slices the request URL path into 3 parts:

        example.com/foo                   -> index.php?a=foo&b=&c=
        example.com/foo/bar               -> index.php?a=foo&b=bar&c=
        example.com/foo/bar/baz           -> index.php?a=foo&b=bar&c=baz
        example.com/foo/bar/baz/bee       -> index.php?a=foo&b=bar&c=baz/bee
        example.com/foo/bar/baz/bee/apple -> index.php?a=foo&b=bar&c=baz/bee/apple
        example.com/foo/bar/baz/bee/apple/and/whatever/else/no/limit/in/those/extra/parameters
            -> index.php?a=foo&b=bar&c=baz/bee/apple/and/whatever/else/no/limit/in/those/extra/parameters

    In short: the first segment of the URL path (foo) should be assigned to a, the second segment (bar) to b, and the rest of the string to c. There is no limit on the number of extra segments collected into c. I wrote this one:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_URI} !=/favicon.ico
            RewriteRule ^(([a-z0-9/]))?(([a-z0-9/]+))?(([a-z0-9]+))(.*)$ index.php?a=$1&b=$2&c=$3 [L,QSA]
        </IfModule>

    But it obviously doesn't work, and I don't even know if what I want is possible. Any suggestions?

    EDIT: After playing with coach manager, I got this one working too:

        RewriteRule ^([^/]*)?/?([^/]*)?/?(.*)?$ index.php?a=$1&b=$2&c=$3 [L,QSA]

