Search Results

Search found 7078 results on 284 pages for 'extended range'.

Page 238/284 | < Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >

  • Basic syntax for an animation loop?

    - by Moshe
    I know that jQuery, for example, can do animation of sorts. I also know that at the very core of the animation, there must be some sort of loop doing the animation. What is an example of such a loop? A complete answer should ideally answer the following questions: What is a basic syntax for an effective animation recursion that can animate a single property of a particular object at a time? The function should be able to vary its target object and the property of that object. What arguments/parameters should it take? What is a good interval for repeating the loop, in milliseconds? (Should this be a parameter/argument to the function?) REMEMBER: The answer is NOT necessarily language specific, but if you are writing in a specific language, please specify which one. Error handling is a plus. {Nothing is more irritating (for our purposes) than an animation that does something strange, like stopping halfway through.} Thanks!
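
    Since the question is explicitly not language specific, here is a minimal sketch in Python of a time-based loop that tweens one named property of one object toward a target value. The names (animate, Box, the 16 ms interval) are illustrative assumptions, not any particular framework's API; keying progress to elapsed time rather than to a frame counter is what keeps the animation from stalling partway if a frame runs late.

        import time

        def animate(obj, prop, target, duration=1.0, interval=0.016):
            """Tween obj.<prop> from its current value to target over duration seconds.

            interval is the sleep between frames (~16 ms, roughly 60 fps). Progress is
            keyed to elapsed time, so the final value is always set exactly.
            """
            if duration <= 0:
                raise ValueError("duration must be positive")
            start_value = getattr(obj, prop)
            start_time = time.monotonic()
            while True:
                t = (time.monotonic() - start_time) / duration
                if t >= 1.0:
                    setattr(obj, prop, target)   # guarantee the exact end value
                    return
                setattr(obj, prop, start_value + (target - start_value) * t)
                time.sleep(interval)

        class Box(object):
            x = 0.0

        box = Box()
        animate(box, "x", 100.0, duration=0.5)
        print(box.x)   # 100.0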

    Read the article

  • How can I filter a Perl DBIx recordset with 2 conditions on the same column?

    - by BrianH
    I'm getting my feet wet in DBIx::Class - loving it so far. One problem I am running into is that I want to query records, filtering out records that aren't in a certain date range. It took me a while to find out how to do a "<=" type of match instead of an equality match: my $start_criteria = ">= $start_date"; my $end_criteria = "<= $end_date"; my $result = $schema->resultset('MyTable')->search( { 'status_date' => \$start_criteria, 'status_date' => \$end_criteria, }); The obvious problem with this is that since the filters are in a hash, I am overwriting the value for "status_date", and am only searching where the status_date <= $end_date. The SQL that gets executed is: SELECT me.* from MyTable me where status_date <= '9999-12-31' I've searched CPAN, Google and SO and haven't been able to figure out how to apply 2 conditions to the same column. All documentation I've been able to find shows how to filter on more than 1 column, but not 2 conditions on the same column. I'm sure I'm missing something obvious - hoping someone here can point it out to me? Thanks in advance! Brian

    Read the article

  • jQuery UI slider coming out looking weird

    - by jondavidjohn
    http://cl.ly/2W1V3s0G2G3y3D133I3U <--screenshot of rendered slider It acts normally by clicking the inner whitespace of the slider and dragging, values act accordingly, but the handles do not move and fill the area, and the line at the top grows/shrinks with the difference of the two values. Here is the code I am using to initiate the slider. $('.sliderific').each(function(){ alert($(this).attr('id')); $(this).slider({ range: true, min: 0, max: 500, values: [ 75, 300 ], slide: function( event, ui ) { $(this).nextAll('.left:first').text(ui.values[ 0 ]); $(this).nextAll('.right:first').text(ui.values[ 1 ]); } }); }); and here is the DOM it's being applied to... <div class="white notwide"> <div id="price-slider" class="sliderific"></div> <span class="em small gray left center">Min Price</span> <span class="em small gray right center">Max Price </span> </div> EDIT : Also, I have verified I am including the proper css and the jquery theme images are connecting and being loaded.

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious whether anyone knew how expensive this operation would be. I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to 'maybe' a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250kb to a very VERY unlikely 10mb (usually within the 250kb to 4mb range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server, and different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first or second hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
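
    For reference, a minimal polling sketch with Python's standard ftplib — everything here (host, credentials, folder name, the on_new_file callback, the 60-second interval) is a placeholder, and it assumes the server supports NLST:

        import time
        from ftplib import FTP

        def watch_folder(host, user, password, folder, on_new_file, interval=60):
            """Poll one FTP folder with NLST and report names not seen on the previous pass."""
            seen = set()
            while True:
                ftp = FTP(host)
                ftp.login(user, password)
                current = set(ftp.nlst(folder))   # names only, no sizes
                ftp.quit()
                for name in sorted(current - seen):
                    on_new_file(name)             # e.g. hand the name to the downloader
                seen = current
                time.sleep(interval)

        # watch_folder("ftp.example.com", "someuser", "secret", "/incoming", print, interval=120)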

    Read the article

  • Error while uploading file method in Client Object Model Sharepoint 2010

    - by user1481570
    I get an error while uploading a file using the Client Object Model with SharePoint 2010. The file itself does get uploaded, and the code compiles with no errors, but while executing I get {"Value does not fall within the expected range."} {System.Collections.Generic.SynchronizedReadOnlyCollection}. I have a method that handles the file upload: /////////////////////////////////////////////////////////////////////////////////////////// public void Upload_Click(string documentPath, byte[] documentStream) { String sharePointSite = "http://cvgwinbasd003:28838/sites/test04"; String documentLibraryUrl = sharePointSite +"/"+ documentPath.Replace('\\','/'); //////////////////////////////////////////////////////////////////// //Get Document List List documentsList = clientContext.Web.Lists.GetByTitle("Doc1"); var fileCreationInformation = new FileCreationInformation(); //Assign to content byte[] i.e. documentStream fileCreationInformation.Content = documentStream; //Allow overwrite of document fileCreationInformation.Overwrite = true; //Upload URL fileCreationInformation.Url = documentLibraryUrl; Microsoft.SharePoint.Client.File uploadFile = documentsList.RootFolder.Files.Add( fileCreationInformation); //uploadFile.ListItemAllFields.Update(); clientContext.ExecuteQuery(); } ///////////////////////////////////////////////////////////////////////////////////////////////// In the MVC 3.0 application, in the controller, I have defined the following method to invoke the upload method: ////////////////////////////////////////////////////////////////////////////////////////////////// public ActionResult ProcessSubmit(IEnumerable<HttpPostedFileBase> attachments) { System.IO.Stream uploadFileStream=null; byte[] uploadFileBytes; int fileLength=0; foreach (HttpPostedFileBase fileUpload in attachments) { uploadFileStream = fileUpload.InputStream; fileLength=fileUpload.ContentLength; } uploadFileBytes= new byte[fileLength]; uploadFileStream.Read(uploadFileBytes, 0, fileLength); using (DocManagementService.DocMgmtClient doc = new DocMgmtClient()) { doc.Upload_Click("Doc1/Doc2/Doc2.1/", uploadFileBytes); } return RedirectToAction("SyncUploadResult"); } ////////////////////////////////////////////////////////////////////////////////////////////////// Please help me locate the error.

    Read the article

  • mysql: Average over multiple columns in one row, ignoring nulls

    - by Sai Emrys
    I have a large table (of sites) with several numeric columns - say a through f. (These are site rankings from different organizations, like alexa, google, quantcast, etc. Each has a different range and format; they're straight dumps from the outside DBs.) For many of the records, one or more of these columns is null, because the outside DB doesn't have data for it. They all cover different subsets of my DB. I want column t to be their weighted average (each of a..f have static weights which I assign), ignoring null values (which can occur in any of them), except being null if they're all null. I would prefer to do this with a simple SQL calculation, rather than doing it in app code or using some huge ugly nested if block to handle every permutation of nulls. (Given that I have an increasing number of columns to average over as I add in more outside DB sources, this would be exponentially more ugly and bug-prone.) I'd use AVG but that's only for group by, and this is w/in one record. The data is semantically nullable, and I don't want to average in some "average" value in place of the nulls; I want to only be counting the columns for which data is there. Is there a good way to do this? Ideally, what I want is something like UPDATE sites SET t = AVG(a*a_weight,b*b_weight,...) where any null values are just ignored and no grouping is happening.
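
    Not an answer to the SQL syntax itself, but a small Python sketch to pin down the intended arithmetic (the values and weights are illustrative): the weighted sum of the non-null columns is divided by the sum of the weights of only those columns, and the result is null when every column is null.

        def weighted_avg_ignoring_none(values, weights):
            """Weighted average over the non-None entries; None if all entries are None."""
            pairs = [(v, w) for v, w in zip(values, weights) if v is not None]
            if not pairs:
                return None
            return sum(v * w for v, w in pairs) / sum(w for _, w in pairs)

        # e.g. columns a, b, c with weights 1.0, 2.0, 0.5 and b missing:
        print(weighted_avg_ignoring_none([3.0, None, 10.0], [1.0, 2.0, 0.5]))  # (3*1 + 10*0.5) / 1.5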

    Read the article

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a csv file, delimited by tabs, in python. I use the following function: def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False, missing=None, with_header=True): """ Parse a file name into an array. Return the array and additional header lines. By default, parse the header lines into dictionaries, assuming the parameters are numeric, using 'parse_header'. """ f = open(filename, 'r') skipped_rows = [] for n in range(skiprows): header_line = f.readline().strip() if raw_header: skipped_rows.append(header_line) else: skipped_rows.append(parse_header(header_line)) f.close() if missing: data = genfromtxt(filename, dtype=None, names=with_header, deletechars='', skiprows=skiprows, missing=missing) else: if delimiter != '\t': data = genfromtxt(filename, dtype=None, names=with_header, delimiter=delimiter, deletechars='', skiprows=skiprows) else: data = genfromtxt(filename, dtype=None, names=with_header, deletechars='', skiprows=skiprows) if data.ndim == 0: data = array([data.item()]) return (data, skipped_rows) the problem is that genfromtxt complains about my files, e.g. with the error: Line #27100 (got 12 columns instead of 16) I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem: #Gene 120-1 120-3 120-4 30-1 30-3 30-4 C-1 C-2 C-5 genesymbol genedesc ENSMUSG00000000001 7.32 9.5 7.76 7.24 11.35 8.83 6.67 11.35 7.12 Gnai3 guanine nucleotide binding protein alpha ENSMUSG00000000003 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Pbsn probasin Is there a better way to write a generic csv2array function? thanks.
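
    If the complaint is only about rows that are short a few columns, one thing worth trying — assuming a numpy recent enough to have these genfromtxt keywords — is letting genfromtxt skip or pad the offending rows instead of aborting. This is a sketch, not a drop-in replacement for csv2array:

        import numpy as np

        # Skip rows whose column count doesn't match (a warning is emitted per skipped line):
        data = np.genfromtxt("my_file.txt", dtype=None, names=True, delimiter="\t",
                             invalid_raise=False)

        # Or treat empty trailing fields as missing values and fill them instead:
        # data = np.genfromtxt("my_file.txt", dtype=None, names=True, delimiter="\t",
        #                      missing_values="", filling_values=np.nan)

        # Note: lines starting with '#' are treated as comments by default, so a
        # '#Gene ...' header line may need the comments= argument adjusted.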

    Read the article

  • jQuery animate bottom problem

    - by andrea
    It's strange. I have an animation that starts when I .hover over a DIV. Inside this div I have another div, positioned absolutely, that I want to move up while the mouse stays over the main div and move back down to its original position when the mouse leaves the main div. A simple hover effect. I have written this function, which works well apart from a strange bug. var inside = $('.inside') inside.hover(function() { $(this).children(".coll_base").stop().animate({ bottom:'+=30px' },"slow") }, function() { $(this).children(".coll_base").stop().animate({ bottom:'-=30px' },"slow"); }); I need +=30 and -=30 because I have to apply this function to a range of divs with different bottom values. The problem is that if I hover on and off the div, the inner DIV animates correctly, but each time it goes back down it ends up with a slightly lower bottom value than before. If I move in and out of the main div repeatedly, I can see the animated div going down even beyond the main div. I think I am missing something needed to resolve this bug. Thanks for any help

    Read the article

  • DSP - Filtering frequencies using DFT

    - by Trap
    I'm trying to implement a DFT-based 8-band equalizer for the sole purpose of learning. To prove that my DFT implementation works I fed an audio signal, analyzed it and then resynthesized it again with no modifications made to the frequency spectrum. So far so good. I'm using the so-called 'standard way of calculating the DFT' which is by correlation. This method calculates the real and imaginary parts both N/2 + 1 samples in length. To attenuate a frequency I'm just doing: float atnFactor = 0.6; Re[k] *= atnFactor; Im[k] *= atnFactor; where 'k' is an index in the range 0 to N/2, but what I get after resynthesis is a slightly distorted signal, especially at low frequencies. The input signal sample rate is 44.1 kHz and since I just want an 8-band equalizer I'm feeding the DFT 16 samples at a time so I have 8 frequency bins to play with. Can someone show me what I'm doing wrong? I tried to find info on this subject on the internet but couldn't find any. Thanks in advance.

    Read the article

  • PHP IP Validation Help

    - by Zubair1
    Hello, I am using this IP validation function that I came across while browsing; it has been working well until today, when I ran into a problem. For some reason the function won't validate this IP as valid: 203.81.192.26 I'm not too great with regular expressions, so I would appreciate any help on what could be wrong. If you have another function, I would appreciate it if you could post that for me. The code for the function is below: public static function validateIpAddress($ip_addr) { global $errors; $preg = '#^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}' . '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$#'; if(preg_match($preg, $ip_addr)) { //now all the integer values are separated $parts = explode(".", $ip_addr); //now we need to check each part can range from 0-255 foreach($parts as $ip_parts) { if(intval($ip_parts) > 255 || intval($ip_parts) < 0) { $errors[] = "ip address is not valid."; return false; } return true; } return true; } else { $errors[] = "please double check the ip address."; return false; } }

    Read the article

  • iPhone Multithreaded Search

    - by Kulpreet
    I'm sort of new to any sort of multithreading and simply can't seem to get a simple search method working on a background thread properly. Everything seems to be in order with an NSAutoreleasePool and the UI being updated on the main thread. The app doesn't crash and does perform a search in the background, but the search results contain several of the same items repeated, depending on how fast I type. The search works properly without the multithreading (which is commented out), but is very slow because of the large amounts of data I am working with. Here's the code: - (void)filterContentForSearchText:(NSString*)searchText { isSearching = YES; NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init]; /* Update the filtered array based on the search text and scope. */ //[self.filteredListContent removeAllObjects]; // First clear the filtered array. for (Entry *entry in appDelegate.entries) { NSComparisonResult result = [entry.gurmukhiEntry compare:searchText options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch) range:NSMakeRange(0, [searchText length])]; if (result == NSOrderedSame) { [self.filteredListContent addObject:entry]; } } [self.searchDisplayController.searchResultsTableView performSelectorOnMainThread:(@selector(reloadData)) withObject:nil waitUntilDone:NO]; //[self.searchDisplayController.searchResultsTableView reloadData]; [apool drain]; isSearching = NO; } - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString { if (!isSearching) { [self.filteredListContent removeAllObjects]; // First clear the filtered array. [self performSelectorInBackground:(@selector(filterContentForSearchText:)) withObject:searchString]; } //[self filterContentForSearchText:searchString]; return NO; // Return YES to cause the search result table view to be reloaded. }

    Read the article

  • multithreading issue

    - by vbNewbie
    I have written a multithreaded crawler and the process is simply creating threads and having them access a list of urls to crawl. They then access the urls and parse the html content. All this seems to work fine. Now when I need to write to tables in a database is when I experience issues. I have 2 declared arraylists that will contain the content each thread parse. The first arraylist is simply the rss feed links and the other arraylist contains the different posts. I then use a for each loop to iterate one while sequentially incrementing the other and writing to the database. My problem is that each time a new thread accesses one of the lists the content is changed and this affects the iteration. I tried using nested loops but it did not work before and this works fine using a single thread.I hope this makes sense. Here is my code: SyncLock dlock For Each l As String In links finallinks.Add(l) Next End SyncLock SyncLock dlock For Each p As String In posts finalposts.Add(p) Next End SyncLock ... Dim i As Integer = 0 SyncLock dlock For Each rsslink As String In finallinks postlink = finalposts.Item(i) i = i + 1 finallinks and finalposts are the two arraylists. I did not include the rest of the code which shows the threads working but this is the essential part where my error occurs which is basically here postlink = finalposts.Item(i) i = i + 1 ERROR: index was out of range. Must be non-negative and less than the size of the collection Is there an alternative?

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26. I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual for LIMIT Optimization. My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem?

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.00 sec)

    WRONG result when limit added! Should be the first row, id 1336:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        1 row in set (0.00 sec)

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.01 sec)

    Works correctly with limit:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        +------+---------------------+
        1 row in set (0.01 sec)

    Additional info:

        explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        | id | select_type | table            | type  | possible_keys                        | key                                  | key_len | ref  | rows | Extra       |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        |  1 | SIMPLE      | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4       | NULL | 3    | Using where |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+

    Read the article

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help me. I am developing a VB.NET app in which I read "real time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet are fetching stock data from some LOCAL DDE server like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on... Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. Values of DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing. THE PROBLEM is here: "FIXED values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the local DDE data source is ON) or "-2146826265" (OFF). If I use C#.NET it's all OK, but not with VB.NET. I want to display a range of the Excel sheet (A1 to J50) in a VB.NET ListView, with values changing every 200 ms (5 times per second). Important: is it possible to BIND "listview items/columns values" to "excel cells" or some local memory variables? Currently, I am reading Excel "cell by cell" and trying to put the values into the .NET ListView, but CPU usage is very high and it's a very slow process. If binding is possible, then how? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution...

    Read the article

  • python thread prob after build

    - by Apache
    Hi experts, my task is to scan wifi at a specific interval and send the results to the server. I have it in Python and it works fine when I run it manually, but after I build it into a package and run it, there is no progress at all. I already asked this question before at http://stackoverflow.com/questions/2735410/python-scritp-problem-once-build-and-package-it; I then re-modified my code as below, and found that the thread is not functioning once I build it: #!/usr/bin/env python import subprocess,threading,... configFile = open('/opt/Jemapoh_Wifi/config.txt', 'r') url = configFile.readline().strip() intervalTime = configFile.readline().strip() status = configFile.readline().strip() print "url "+url print "intervalTime "+intervalTime print "Status "+status.strip() def getMacAddress(): proc = subprocess.Popen('ifconfig -a wlan0 | grep HWaddr | sed \'/^.*HWaddr */!d; s///;q\'', shell=True, stdout=subprocess.PIPE, ) macAddress = proc.communicate()[0].strip() return macAddress def getTimestamp(): from time import strftime timeStamp = strftime("%Y-%m-%d %H:%M:%S") return timeStamp def scanWifi(): try: print "Scanning..." proc = subprocess.Popen('iwlist scan 2>/dev/null', shell=True, stdout=subprocess.PIPE, ) stdout_str = proc.communicate()[0] stdout_list=stdout_str.split('\n') essid=[] rssi=[] preQuality=[] for line in stdout_list: line=line.strip() match=re.search('ESSID:"(\S+)"',line) if match: essid.append(match.group(1)) match=re.search('Quality=(\S+)',line) if match: preQuality.append(match.group(1)) for qualityConversion in preQuality: qualityConversion = qualityConversion.split()[0].split('/') temp = str(int(round(float(qualityConversion[0]) / float(qualityConversion[1]) * 100))).rjust(2) rssi.append(temp) dataToPost = '{"userId":"' + getMacAddress() + '","timestamp":"' + getTimestamp() + '","wifi":[' for no in range(len(essid)): dataToPost += '{"ssid":"' + essid[no] + '","rssi":"' + rssi[no] + '"}' if no+1 == len(essid): pass else: dataToPost += ',' dataToPost += ']}' query_args = {"data":dataToPost} request = urllib2.Request(url) request.add_data(urllib.urlencode(query_args)) request.add_header('Content-Type', 'application/x-www-form-urlencoded') print "Waiting for server response..." print urllib2.urlopen(request).read() print "Data Sent @ " + getTimestamp() print "------------------------------------------------------" t = threading.Timer(int(intervalTime), scanWifi).start() except Exception, e: print e t = threading.Timer(int(intervalTime), scanWifi) t.start() Once built, it never reaches the thread. Can anyone help with why the thread is not working after the build? Thanks
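
    Without seeing the build setup it is hard to say why the Timer never fires, but one structural simplification worth trying — purely a sketch, and it assumes the threading.Timer lines are removed from scanWifi first — is to drive the periodic scan from one plain loop instead of a chain of timers:

        import time

        def run_every(interval_seconds, job):
            """Call job() repeatedly, interval_seconds apart, logging failures instead of dying."""
            while True:
                try:
                    job()
                except Exception as e:
                    print("scan failed: %s" % e)
                time.sleep(interval_seconds)

        # run_every(int(intervalTime), scanWifi)   # hypothetical wiring into the existing script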

    Read the article

  • Is there a website to look up common, already written functions?

    - by pinnacler
    I'm sitting here writing a function that I'm positive has been written before, somewhere on earth. It's just too common to have not been attempted, and I'm wondering why I can't just go to a website and search for a function that I can then copy and paste into my project in 2 seconds, instead of wasting my day reinventing the wheel. Sure there are certain libraries you can use, but where do you find these libraries and when they are absent, is there a site like I'm describing? Possibly a wiki of some type that contains free code that anybody can edit and improve? Edit: I can code things fine, I just don't know HOW to do them. So for example, right now, I'm trying to localize a robot/car/point in space. I KNOW there is a way to do it, just based on range and distance. Triangulation and Trilateration. How to code that is a different story. A site that could have pseudocode, step by step, for how to do that would be ridiculously helpful. It would also ensure the optimal solution since everybody can edit it. I'm also writing in Matlab, which I hate because it's quirky, adding to my desire for creating a website like I describe.

    Read the article

  • Python - Why use anything other than uuid4() for unique strings?

    - by orokusaki
    I see quite a few implementations of unique string generation for things like uploaded image names, session IDs, et al., and many of them employ the usage of hashes like SHA1, or others. I'm not questioning the legitimacy of using custom methods like this, but rather just the reason. If I want a unique string, I just say this: >>> import uuid >>> uuid.uuid4() 07033084-5cfd-4812-90a4-e4d24ffb6e3d And I'm done with it. I wasn't very trusting before I read up on uuid, so I did this: >>> import uuid >>> s = set() >>> for i in range(5000000): # That's 5 million! >>> s.add(uuid.uuid4()) ... ... >>> len(s) 5000000 Not one repeater (I didn't expect one considering the odds are like 1.108e+50, but it's comforting to see it in action). You could even halve the odds by just making your string by combining 2 uuid4()s. So, with that said, why do people spend time on random() and other stuff for unique strings, etc? Is there an important security issue or something else regarding uuid?

    Read the article

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, My current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want to improve it, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation that is quick enough for solving problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60 I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using the standard trial division tuned with 6k±1. Is there anything better than this? Do I already need the Rabin-Miller method for numbers that are this large? primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] def isprime(n): if n <= 100: return n in primes_under_100 if n % 2 == 0 or n % 3 == 0: return False for f in range(5, int(n ** .5) + 1, 6): if n % f == 0 or n % (f + 2) == 0: return False return True How can I improve this algorithm?
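
    For n below 1 billion, a deterministic Miller-Rabin test with a small fixed set of witnesses is usually fast enough. The sketch below uses the witnesses 2, 3, 5 and 7, which to the best of my knowledge are sufficient for every n < 3,215,031,751, comfortably covering the stated range — worth double-checking before relying on it:

        def is_prime(n):
            """Deterministic Miller-Rabin for n < 3,215,031,751 (witnesses 2, 3, 5, 7)."""
            if n < 2:
                return False
            for p in (2, 3, 5, 7):
                if n % p == 0:
                    return n == p
            # write n - 1 as d * 2**s with d odd
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for a in (2, 3, 5, 7):
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = (x * x) % n
                    if x == n - 1:
                        break
                else:
                    return False
            return True

        print(is_prime(999999937))  # True: a prime just under 1 billion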

    Read the article

  • ruby comma operator and step question

    - by ryan_m
    So, I'm trying to learn Ruby by doing some Project Euler questions, and I've run into a couple of things I can't explain, and the comma "operator" is in the middle of both. I haven't been able to find good documentation for this; maybe I'm just not using Google as I should, but good Ruby documentation seems a little sparse... 1: How do you describe how this is working? The first snippet is the Ruby code I don't understand, the second is the code I wrote that does the same thing, only after painstakingly tracing the first: #what is this doing? cur, nxt = nxt, cur + nxt #this, apparently, but how to describe the above? nxt = cur + nxt cur = nxt - cur 2: In the following example, how do you describe what the line with 'step' is doing? From what I can gather, the step command works like (range).step(step_size), but this seems to be doing (starting_point).step(ending_point, step_size). Am I right with this assumption? Where do I find good documentation on this? #/usr/share/doc/ruby1.9.1-examples/examples/sieve.rb # sieve of Eratosthenes max = Integer(ARGV.shift || 100) sieve = [] for i in 2 .. max sieve[i] = i end for i in 2 .. Math.sqrt(max) next unless sieve[i] (i*i).step(max, i) do |j| sieve[j] = nil end end puts sieve.compact.join(", ")

    Read the article

  • Java: Infinite loop using Scanner in.hasNextInt()

    - by Tomek
    Hi everyone, I am using the following code: while (invalidInput) { // ask the user to specify a number to update the times by System.out.print("Specify an integer between 0 and 5: "); if (in.hasNextInt()) { // get the update value updateValue = in.nextInt(); // check to see if it was within range if (updateValue >= 0 && updateValue <= 5) { invalidInput = false; } else { System.out.println("You have not entered a number between 0 and 5. Try again."); } } else { System.out.println("You have entered an invalid input. Try again."); } } However, if I enter a 'w' it will tell me "You have entered invalid input. Try Again." and then it will go into an infinite loop showing the text "Specify an integer between 0 and 5: You have entered an invalid input. Try again." Why is this happening? Isn't the program supposed to wait for the user to input and press enter each time it reaches the statement: if (in.hasNextInt()) Thanks in advance, Tomek

    Read the article

  • Python multiprocessing doesn't play nicely with uuid.uuid4().

    - by yig
    I'm trying to generate a uuid for a filename, and I'm also using the multiprocessing module. Unpleasantly, all of my uuids end up exactly the same. Here is a small example: import multiprocessing import uuid def get_uuid( a ): ## Doesn't help to cycle through a bunch. #for i in xrange(10): uuid.uuid4() ## Doesn't help to reload the module. #reload( uuid ) ## Doesn't help to load it at the last minute. ## (I simultaneously comment out the module-level import). #import uuid ## uuid1() does work, but it differs only in the first 8 characters and includes identifying information about the computer. #return uuid.uuid1() return uuid.uuid4() def main(): pool = multiprocessing.Pool( 20 ) uuids = pool.map( get_uuid, range( 20 ) ) for id in uuids: print id if __name__ == '__main__': main() I peeked into uuid.py's code, and it seems to depending-on-the-platform use some OS-level routines for randomness, so I'm stumped as to a python-level solution (to do something like reload the uuid module or choose a new random seed). I could use uuid.uuid1(), but only 8 digits differ and I think there are derived exclusively from the time, which seems dangerous especially given that I'm multiprocessing (so the code could be executing at exactly the same time). Is there some Wisdom out there about this issue?
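
    I can't say for certain this is the cause, but if uuid4 is falling back to the random module on the platform in question, every forked worker inherits the same generator state. Reseeding each worker through a Pool initializer is a cheap thing to try — a sketch only, with get_uuid standing in for the function above:

        import multiprocessing
        import os
        import random
        import uuid

        def reseed():
            # Give each worker process its own random state. This only matters if
            # uuid4 is using the random-module fallback rather than os.urandom.
            random.seed(os.urandom(16))

        def get_uuid(_):
            return uuid.uuid4()

        if __name__ == "__main__":
            pool = multiprocessing.Pool(20, initializer=reseed)
            print(len(set(pool.map(get_uuid, range(20)))))  # hopefully 20 distinct values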

    Read the article

  • Lucene (.NET) Document structure and performance suggestions.

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not thinking it's the right choice here. My problem is that the query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks Josh PS: I have already read over all the other Lucene performance discussions on here, and on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick: def rank_simple(vector): return [rank for rank in sorted(range(len(vector)), key=vector.__getitem__)] Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. Which one would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway as I'm developing a tool which should work without NumPy as well.
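
    In case it helps, here is a pure-Python sketch that assigns each group of tied values the average of the ranks they would have received individually (no NumPy required):

        def rank_with_ties(vector):
            """Average ranks for ties, e.g. [1, 2, 3, 3, 3, 4, 5] -> [0, 1, 3, 3, 3, 5, 6]."""
            order = sorted(range(len(vector)), key=vector.__getitem__)
            ranks = [0.0] * len(vector)
            i = 0
            while i < len(order):
                j = i
                while j + 1 < len(order) and vector[order[j + 1]] == vector[order[i]]:
                    j += 1
                avg = (i + j) / 2.0              # average of positions i..j in sorted order
                for k in range(i, j + 1):
                    ranks[order[k]] = avg
                i = j + 1
            return ranks

        print(rank_with_ties([1, 2, 3, 3, 3, 4, 5]))  # [0.0, 1.0, 3.0, 3.0, 3.0, 5.0, 6.0]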

    Read the article

  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time consuming if there are many users involved. So I would like to map areas (countries, cities, I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time. I have read the basics of GIS and spatial querying on the mysql website but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area. Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any preexisting databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
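
    Before reaching for the spatial extensions, a cheap two-step filter often goes a long way: first use a rectangular bounding box (which can be indexed on plain lat/lon columns) to cut the candidate set down, then apply the exact great-circle distance only to those candidates. A rough sketch of the math — the Earth-radius constant and the haversine formula are standard approximations, not anything MySQL-specific:

        import math

        EARTH_RADIUS_M = 6371000.0

        def bounding_box(lat, lon, radius_m):
            """(min_lat, max_lat, min_lon, max_lon) enclosing a circle of radius_m metres."""
            dlat = math.degrees(radius_m / EARTH_RADIUS_M)
            dlon = math.degrees(radius_m / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
            return lat - dlat, lat + dlat, lon - dlon, lon + dlon

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

        # 1) fetch rows WHERE lat BETWEEN min_lat AND max_lat AND lon BETWEEN min_lon AND max_lon
        # 2) keep rows where haversine_m(my_lat, my_lon, row_lat, row_lon) <= 100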

    Read the article

  • How to make UISlider output nice rounded numbers exponentially?

    - by RickiG
    Hi, I am implementing a UISlider a user can manipulate to set a distance. I have never used the CocoaTouch UISlider, but have used other frameworks' sliders; usually there is a variable for setting the "step" and other "helper" properties. The documentation for the UISlider deals only with a max and min value, and the output is always a 6 decimal float with a linear relation to the position of the slider knob. I guess I will have to implement the desired functionality step by step. To the user, the min/max values range from 10 m to 999 km. I am trying to implement this in an exponential way that will feel natural to the user, i.e. the user experiences a feeling of control over the values, big or small, and the "output" has reasonable values. Values like 10 m, 200 m, 2.5 km, 150 km etc. instead of 1.2342356 m or 108.93837756 km. I would like the step size to increase by 10 m for the first 200 m, then maybe 50 m up to 500 m; then, when passing the 1000 m value, it starts to deal with kilometers, so the step size is 1 km up until 50 km, then maybe 25 km steps etc. Any way I go about this, I end up doing a lot of rounding and a lot of calculations wrapped in a forest of if statements and NSString/Number conversions, each time the user moves the slider just a little. I was hoping someone could lend me a bit of inspiration/math help or make me aware of a leaner approach to solving this problem. My last idea is to populate an array with 100 string values, then have the slider int value correspond to a string; this is not very flexible, but doable. Thank you in advance for any help given:)
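
    One way to get both the natural feel and the nice round numbers is to split the problem in two: map the slider's 0-1 position onto the 10 m to 999 km range exponentially, then snap the result to a small table of step sizes. A language-neutral sketch of the math, written here in Python — the step thresholds are just examples to tune, not anything prescribed by UISlider:

        def slider_to_distance(t, d_min=10.0, d_max=999000.0):
            """Map slider position t in [0, 1] exponentially onto [d_min, d_max] metres."""
            return d_min * (d_max / d_min) ** t

        def snap(metres, d_max=999000):
            """Round to a 'nice' step that grows with the value (example thresholds)."""
            steps = [(200, 10), (500, 50), (1000, 100), (50000, 1000), (float("inf"), 25000)]
            for limit, step in steps:
                if metres <= limit:
                    return min(round(metres / step) * step, d_max)

        for t in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(t, snap(slider_to_distance(t)))   # e.g. 0.25 -> 180 m, 0.5 -> 3000 m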

    Read the article
