Search Results

Search found 7538 results on 302 pages for 'parallel processing'.


  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check!

    (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and the divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).)

    A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8MHz and has an instruction cycle of 0.5us (4 clock steps per instruction). So:

    - Taking Tacq = 6.06 us (acquisition time for the ADC, assuming chip temp. = 50*C) [datasheet p34]
    - Taking Fosc = 8MHz (clock speed) and divisor = 4 (4 clock steps per CPU instruction)
    - This gives TAD = 0.5us (TAD = 1/(Fosc/divisor))
    - Conversion time is 13*TAD [datasheet p31], i.e. 6.5us
    - ADC duration is then Tacq + 13*TAD = 12.56 us
    - Assuming at least 2 instructions for load/store: another 1 us [0.5 us per instruction]
    - That alone would give a max sampling rate of 73.7 ksps (1/13.56us)
    - Supposing 8 more instructions for real-time processing: another 4 us
    - Thus, total ADC/handling time = 17.56us (12.56us + 1us + 4us), so the expected upper sampling rate is 56.9 ksps
    - The Nyquist frequency for this sampling rate is therefore about 28.5 kHz

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals bandlimited to roughly 28 kHz. Is this a correct interpretation of the information given in the datasheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE
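
    The arithmetic above is easy to sanity-check mechanically. Here is a small Python sketch that just replays the question's numbers (every figure is quoted from the post itself, not read independently from the datasheet):

        # Replay of the sampling-rate arithmetic above; all figures are
        # quoted from the question, not taken from the datasheet directly.
        t_acq   = 6.06e-6        # acquisition time at 50*C [datasheet p34]
        f_osc   = 8e6            # oscillator frequency
        divisor = 4              # clock steps per instruction cycle
        t_ad    = divisor / f_osc            # 0.5 us per TAD
        t_conv  = 13 * t_ad                  # 6.5 us [datasheet p31]
        t_adc   = t_acq + t_conv             # 12.56 us total ADC duration

        t_instr = 0.5e-6
        t_total = t_adc + (2 + 8) * t_instr  # + load/store + processing

        print("ADC duration:     %.2f us" % (t_adc * 1e6))          # 12.56
        print("Total per sample: %.2f us" % (t_total * 1e6))        # 17.56
        print("Max sample rate:  %.1f ksps" % (1 / t_total / 1e3))  # ~56.9
        print("Nyquist limit:    %.1f kHz" % (1 / t_total / 2e3))   # ~28.5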


  • I get an Exception when trying to implement reset password with Authlogic

    - by Tam
    Hi, I'm using Ruby on Rails 2.3.5 and Ruby 1.9 and implemented Authlogic as my authentication engine. Authlogic works fine. But now I have been trying to implement password reset using the following tutorial: http://www.binarylogic.com/2008/11/16/tutorial-reset-passwords-with-authlogic/

    But I'm getting the following exception when I visit mysite.com/password_resets/new:

        Processing PasswordResetsController#new (for 127.0.0.1 at 2010-03-13 01:09:45) [GET]
        Completed in 9ms (DB: 0) | 200 [http://localhost/password_resets/new]
        [2010-03-13 01:09:45] ERROR TypeError: can't convert Array into String
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/string/output_safety.rb:34:in `concat'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/string/output_safety.rb:34:in `concat_with_safety'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/webrick.rb:63:in `block in service'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/response.rb:158:in `each'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/response.rb:158:in `each'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:16:in `send'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:16:in `method_missing'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:22:in `method_missing'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/webrick.rb:62:in `service'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/httpserver.rb:111:in `service'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/httpserver.rb:70:in `run'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/server.rb:183:in `block in start_thread'

    Honestly, I'm not sure where to start debugging this. Is it because I'm using Ruby 1.9? Some people seem to be having trouble with Authlogic and Ruby 1.9: http://isitruby19.com/authlogic

    Please advise me on how to go about solving/debugging this issue. Thanks, Tam


  • Get a jQuery function to execute after, and independently of, onLoad

    - by lewicki
    Is it possible to execute a jQuery function by injecting it into the page? It CAN'T be attached to the .ready function. The reason is that the user will be uploading an image via an iframe, and I need Jcrop to execute after I display the uploaded image.

        echo "<script>$('#pictures_step1_right_bottom1', window.parent.document).html('<img id=\"cropbox\" src=\"../../box/" . 'THM' . $filenameAlt . '.jpg' . "\">');</script>";
        echo "<script>jQuery(function(){jQuery('#cropbox').Jcrop();});</script>";

    This is executed in the iframe that is processing the image; it then sends it to the main page. This does not work, however.

    EDIT: Ended up running a timer:

        function checkPic() {
            if ($('#cropbox1').length != 0) {
                jQuery(function() {
                    jQuery('#cropbox1').Jcrop();
                });
            }
            timer(); // check again after the timeout below
        }

        function timer() {
            var myTimer = setTimeout('checkPic()', 3000); // check every 3 seconds
        }


  • Disabling redraw in WinForms app

    - by Ryan
    I'm working on a C#.NET application which has a somewhat annoying bug in it. The main window has a number of tabs, each of which has a grid on it. When switching from one tab to another, or selecting a different row in a grid, it does some background processing, and during this the menu flickers as it's redrawn (File, Help, etc. menu items as well as the window icon and title).

    I tried disabling the redraw on the window while switching tabs/rows (WM_SETREDRAW message) at first. In one case, it works perfectly. In the other, it solves the immediate bug (title/menu flicker), but between disabling the redraw and enabling it again, the window is "transparent" to mouse clicks - there's a small window (<1 sec) in which I can click and it will, say, highlight an icon on my desktop, as if the app wasn't there at all. If I have something else running in the background (Firefox, say) it will actually get focus when clicked (and draw part of the browser, say the address bar).

    Here's the code I added:

        m = new Message();
        m.HWnd = System.Windows.Forms.Application.OpenForms[0].Handle; // top level
        m.WParam = (IntPtr)0; // disable redraw
        m.LParam = (IntPtr)0; // unused
        m.Msg = 11;           // WM_SETREDRAW
        WndProc(ref m);

        // <snip - application ignores clicks while in this section (in one case)>

        m = new Message();
        m.HWnd = System.Windows.Forms.Application.OpenForms[0].Handle; // top level
        m.WParam = (IntPtr)1; // enable redraw
        m.LParam = (IntPtr)0; // unused
        m.Msg = 11;           // WM_SETREDRAW
        WndProc(ref m);
        System.Windows.Forms.Application.OpenForms[0].Refresh();

    Does anyone know if a) there's a way to fix the transparent-application problem here, or b) I'm doing it wrong in the first place and this should be fixed some other way?


  • Rails 2.3.11 Server Crashing After 4 Requests

    - by Taka
    I have a Rails 2.3.11 application running on my local Windows machine using InstantRails. I cd to my application directory, run ruby script/server to start the server, and point my browser to localhost:3000. I get the page I expect, and am able to click a few links to other pages (all of them static). The problem starts when I load the 4th page or so. My server crashes, with this message:

        Processing HomeController#index (for 127.0.0.1 at 2012-06-23 15:48:40) [GET]
        Rendering template within layouts/application
        Rendering home/index
        Completed in 11ms (View: 9, DB: 1) | 200 OK [http://localhost/index]
        C:/rails/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.11/lib/active_support/memoizable.rb:46: [BUG] Segmentation fault
        ruby 1.8.7 (2012-02-08 patchlevel 358) [i386-mingw32]

        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.

    I've uninstalled this gem and reinstalled it, which didn't help. It doesn't seem to be the gem though, because the segmentation fault sometimes occurs in C:/rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:114 or C:/rails/ruby/lib/ruby/1.8/benchmark.rb:306.

    Versions:

        >ruby -v
        ruby 1.8.7 (2012-02-08 patchlevel 358) [i386-mingw32]
        >rails -v
        Rails 2.3.11

    I'd like to get this fixed so that while I'm developing I don't have to keep restarting my server. Any suggestions?


  • Calling the jQuery autocomplete plugin

    - by martinezgeovani
    When I use jQuery's autocomplete and hardcode the array values in the page it works wonderfully; but what I need to do is obtain the array values from either a web service or from a public function inside a controller. I have tried various ways and can't seem to make it work. The farthest I got is pulling the data in as one long string, and when the autocomplete results are provided it's the long string that gets matched - which I understand why.

        $("#TaskEmailNotificationList").autocomplete("http://localhost/BetterTaskList/Accounts/registeredUsersEmailList", {
            multiple: true,
            mustMatch: false,
            multipleSeparator: ";",
            autoFill: true
        });

    Has anyone encountered this? I am using C#.

    UPDATE: The below code is a step forward - I am now getting an array returned - but I think I'm processing it wrong on my page.

        var emailList = "http://localhost/BetterTaskList/Account/RegisteredUsersEmailList";
        $("#TaskEmailNotificationList").autocomplete(emailList, {
            multiple: true,
            mustMatch: false,
            multipleSeparator: ";",
            autoFill: true
        });

        [HttpGet]
        public ActionResult RegisteredUsersEmailList() {
            BetterTaskListDataContext db = new BetterTaskListDataContext();
            var emailList = from u in db.Users select u.LoweredUserName;
            return Json(emailList.ToList(), JsonRequestBehavior.AllowGet);
        }


  • Thread reaches its end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect, and are removed from the debug pane. However, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...".

    There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log line is being written I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks.

    EDIT: I created this test to check the garbage-collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes, rather than waiting for the object to be de-referenced/GC'd.


  • Better viewing of postfix mail queue files than postcat?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days, and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent.

    My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use?

    Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject=`cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
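
    One possible way to get the attachments out without hand-parsing queue files (a sketch, not tested against live Postfix output): have postcat emit just the message headers and body - newer Postfix versions accept postcat -bhq <queue-id> for that - and feed the result to a real MIME parser. Python's stdlib email package would do it; the file name below is a placeholder for one of the dumps the script above produces:

        # Sketch: parse one dumped queue file and save its attachments.
        # Assumes the dump is a plain RFC 2822 message (headers + body only);
        # the file name is a placeholder.
        import email
        from email import policy

        with open("ABC123DEF.email.txt", "rb") as f:
            msg = email.message_from_binary_file(f, policy=policy.default)

        print("Subject:", msg["Subject"])
        for part in msg.iter_attachments():
            name = part.get_filename()
            if name:
                with open(name, "wb") as out:
                    out.write(part.get_payload(decode=True))
                print("saved attachment:", name)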


  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model, and I'm trying to understand why that model versus a peer-to-peer (PTP) model.

    To me PTP would be preferable because going direct between peers would be more efficient (fewer IP hops and no processing along the way by a server computer), with no need for a server and its costs and dependencies. I found some products offer the ability to switch from PTP to C/S if the PTP proves insufficient. As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation; then the server (supposedly with more bandwidth) could do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, and the server would use its superior bandwidth to relay the message to the multiple other peers.

    But in a situation with one-on-one voice chatting, it seems that PTP would be best. A server would not reduce either peer's bandwidth requirements and would only add unnecessary overhead, dependency and cost. So, for one-on-one voice chatting:

    - Am I mistaken on anything above?
    - Would peer-to-peer be best?
    - Would a server provide anything of value that could not be provided by a client-only program?
    - Is there anything else that I should be taking into consideration?
    - And lastly, can you recommend any Silverlight PTP or C/S voice chat products?

    Thanks in advance for any info.
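
    The upstream-bandwidth argument in the question can be made concrete with a little arithmetic. A sketch (the 64 kbps per-stream codec rate is an assumed figure, not something from the question):

        # Upstream bandwidth per peer for an N-way voice call.
        codec_kbps = 64  # assumed per-stream voice codec rate

        def upstream_p2p_mesh(n):
            return codec_kbps * (n - 1)   # each peer sends to every other peer

        def upstream_client_server(n):
            return codec_kbps             # each peer sends one stream to the server

        for n in (2, 3, 5, 8):
            print("%d peers: mesh %d kbps up, client/server %d kbps up"
                  % (n, upstream_p2p_mesh(n), upstream_client_server(n)))
        # At n=2 the mesh costs no more upstream than client/server, which is
        # why P2P looks attractive for one-on-one chat.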


  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. index.php has a file upload form that posts back to index.php. I can access $_FILES no problem on index.php after submitting the form.

    My issue is that I want (after the form submit and the page load) to use .ajax (jQuery) to call another .php file, so that file can open and process some of the rows and return the results to the ajax call. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process (put in the DB, etc.) the CSV in chunks and display the results for the user in between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ minutes for them all to be processed.

    I don't want to move this file (save it), because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could cron script it, but I don't want to. What I would really like to do is pass the (single) $_FILES through .ajax, OR save it in $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code, if that helps:

        function processCSV(startIndex, length) {
            $.ajax({
                url: "ajax-targets/process-csv.php",
                dataType: "json",
                type: "POST",
                data: { startIndex: startIndex, length: length },
                timeout: 60000, // 1000 = 1 sec
                success: function(data) {
                    // jQuery to display the rows from the CSV
                    var newStart = startIndex + length;
                    if (newStart <= data['csvNumRows']) {
                        processCSV(newStart, length);
                    }
                }
            });
        }
        processCSV(1, 2);

    P.S. I did try this - Passing $_FILES or $_POST to a new page with PHP - but it's not working for me :( SOS.
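
    For what it's worth, the server-side half of this pattern is language-independent. The sketch below (Python standing in for process-csv.php; the file path and the DB step are invented) shows the startIndex/length contract the ajax loop above expects, including the csvNumRows field it checks:

        # Sketch of the chunk-processing endpoint (Python stand-in for
        # process-csv.php; file path and row handling are placeholders).
        import csv, itertools, json

        def process_chunk(path, start_index, length):
            with open(path, newline="") as f:
                total = sum(1 for _ in f)  # row count; rescanning per request is crude but simple
                f.seek(0)
                reader = csv.reader(f)
                rows = list(itertools.islice(reader, start_index - 1,
                                             start_index - 1 + length))
            # ... validate each row and insert it into the DB here ...
            return json.dumps({"csvNumRows": total, "processed": len(rows)})

        print(process_chunk("upload.csv", 1, 2))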


  • HTML5 vs Flash - full comparison chart anywhere?

    - by iddqd
    So, since Steve Jobs said Flash sucks and implied that HTML5 can do everything Flash can without the need for a plugin, I keep hearing those exact words from a lot of people. I would really like to have a chart somewhere (similar to http://en.wikipedia.org/wiki/Comparison_of_layout_engines_%28HTML5%29#Form_elements_and_attributes ) that I can just show those people - one showing all the little things that Flash can do right now that HTML5/Ajax/CSS is not yet even thinking about, but of course also the things that HTML5 does better.

    I would like to see details compared such as audio playback, realtime audio processing, byte-level access, bitmap data manipulation, webcam access, binary sockets, and stuff in the works such as P2P technology (Adobe Stratus), plus all the stuff I don't know about myself. Ideally with examples of what can be accomplished with, let's say, binary sockets (such as a POP3 client), because otherwise it won't mean a lot to non-programmers - they will just say "well, we can do without binary sockets". And ideally with some current benchmarks and some examples of websites that use this technology.

    I've searched the web and am surprised not to find anything. So is there such a comparison somewhere? Or does anybody want to create one and post it to Wikipedia? ;-)


  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab-delimited, with the user-agent strings in double quotes. I need to parse each of these columns, and based on the answer to my other post I used the Text::CSV module.

        94410634    0   GET "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)"   1

    The code is a simple one:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        my $csv = Text::CSV->new(sep_char => "\t");

        while (<>) {
            if ($csv->parse($_)) {
                my @columns = $csv->fields();
                print "@columns\n";
            } else {
                my $err = $csv->error_input;
                print "Failed to parse line: $err";
            }
        }

    But I get the "Failed to parse line:" error when I try it on this data set. What am I doing wrong? I need to extract the 4th column, containing the user-agent strings, for further processing.
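
    For comparison, the same parse is nearly built in to Python's stdlib csv module (a sketch, assuming the file really is tab-delimited with double-quoted fields as in the sample; the file name is a placeholder):

        # Sketch: pull the 4th column (the user-agent string) out of a
        # tab-delimited file with double-quoted fields.
        import csv

        with open("access.log", newline="") as f:
            for row in csv.reader(f, delimiter="\t", quotechar='"'):
                if len(row) >= 4:
                    print(row[3])  # 4th column holds the UA string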


  • How to add an image to an SSRS report with a dynamic url?

    - by jrummell
    I'm trying to add an image to a report. The image src URL is an IHttpHandler that takes a few query string parameters. Here's an example:

        <img src="Image.ashx?item=1234567890&lot=asdf&width=50" alt=""/>

    I added an Image to a cell, then set Source to External and Value to the following expression:

        ="Image.ashx?item="+Fields!ItemID.Value+"&lot="+Fields!LotID.Value+"&width=50"

    But when I view the report, it renders the image HTML as:

        <IMG SRC="" />

    What am I missing?

    Update: Even if I set Value to "image.jpg" it still renders an empty src attribute. I'm not sure if it makes a difference, but I'm using this with a VS 2008 ReportViewer control in Remote processing mode.

    Update: I was able to get the images to display in the Report Designer (VS 2005) with an absolute path (http://server/path/to/http/handler), but they didn't display on the Report Manager website. I even set up an Unattended Execution Account that has access to the external URLs.


  • jQuery: select elements between two elements that are not siblings

    - by naugtur
    E.g. (I've removed attributes, but it's a bit of auto-generated HTML):

        <img class="p"/>
        <div>
          hello world
          <p>
            <font><font size="2">text.<img class="p"/> some text</font></font>
          </p>
          <img class="p"/>
          <p>
            <font><font size="2">more text<img class="p"/> another piece of text </font></font>
          </p><img class="p"/>
          some text on the end
        </div>

    I need to apply some highlighting with backgrounds to all text that is between the two closest [in the HTML code] img.p elements when hovering over the first of them. I have no idea how to do that. Let's say I hover the first img.p - it should highlight "hello world" and "text." and nothing else. And now the worst part - I need the backgrounds to disappear on mouseleave.

    I need it to work with any HTML mess possible. The above is just an example, and the structure of the documents will differ.

    [tip: Processing the whole HTML before binding hover and putting in some spans etc. is OK, as long as it doesn't change the looks of the output document.]


  • Working with images (CGImage), exif data, and file icons

    - by Nick
    What I am trying to do (under 10.6)... I have an image (JPEG) that includes an icon in the image file (that is, you see an icon based on the image in the file, as opposed to a generic JPEG icon, in file open dialogs in a program). I wish to edit the EXIF metadata and save it back to the image in a new file. Ideally I would like to save this back to an exact copy of the file (i.e. preserving any custom embedded icons created etc.); however, in my hands the icon is lost.

    My code (some bits removed for ease of reading):

        // set up source ref. I THINK THE PROBLEM IS HERE - NOT GRABBING THE INITIAL DATA
        CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef) URL, NULL);

        // snag metadata
        NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

        // make metadata mutable
        NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease];

        // grab exif
        NSMutableDictionary *EXIFDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease];

        << edit exif >>

        // add back edited exif
        [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];

        // get source type
        CFStringRef UTI = CGImageSourceGetType(source);

        // set up write data
        NSMutableData *data = [NSMutableData data];
        CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)data, UTI, 1, NULL);

        // add the image plus modified metadata. PROBLEM HERE? NOT ADDING THE ICON
        CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);

        // write to data
        BOOL success = NO;
        success = CGImageDestinationFinalize(destination);

        // save data to disk
        [data writeToURL:saveURL atomically:YES];

        // cleanup
        CFRelease(destination);
        CFRelease(source);

    I don't know if this is really a question of image handling, file handling, post-save processing (I could use sips), or me just being thick (I suspect the last).

    Nick


  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello. I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET.

    So far I used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemory exception was thrown (I assume because I cannot load the whole table into my memory). :)

    Now I am looking for a good way to perform this extract/transform/load without storing the whole table in memory. I would like to use a DataReader and read the single data records into a DataTable. If there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to have free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory.

    My code so far:

        Dim dt As New DataTable
        Dim count As Int32
        count = 0

        ' reads data records from oracle database table
        While rdr.Read()
            ' read n records and add them to a dataTable
            While count < 10000
                dt.Rows.Add(????)
                count = count + 1
            End While
            ' transform data in the dataTable, and insert it to the destination
            ' flush the dataTable after insertion
            count = 0
        End While

    Thank you very much for your response!
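
    The batch-and-flush pattern being asked about looks roughly the same in any database API. A sketch in Python (DB-API fetchmany; sqlite3 stands in for the Oracle and MSSQL drivers, and the transform step is a placeholder):

        # Sketch: read in batches, transform, flush - so only one batch is
        # ever in memory. sqlite3 is a stand-in for the real drivers, and
        # both databases are assumed to already have big_table.
        import sqlite3

        src = sqlite3.connect("source.db")
        dst = sqlite3.connect("dest.db")
        cur = src.cursor()
        cur.execute("SELECT id, payload FROM big_table")

        BATCH = 10000
        while True:
            rows = cur.fetchmany(BATCH)    # at most BATCH rows in memory
            if not rows:
                break
            transformed = [(r[0], r[1].upper()) for r in rows]  # placeholder transform
            dst.executemany("INSERT INTO big_table VALUES (?, ?)", transformed)
            dst.commit()                   # flush the batch, then reuse the memory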


  • Alternate cause of BadImageFormatException in .NET Assembly?

    - by Phillip Knauss
    I'm working on a .NET 3.5 console application in C# which uses a VC++ unmanaged DLL. It ran without a problem when I worked on it a few weeks ago, but I'm coming back to it today and am now getting a BadImageFormatException ("An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)). My development workstation is running 64bit Windows 7, and I do a fair amount of work with unmanaged code, so I immediately checked that the .NET assembly and the VC++ library both had x86 targets. They did. Just to be sure, I cleaned and rebuilt the VC++ library and the .NET assembly, to no avail. Neither system is doing anything particularly unusual. The VC++ library loads a binary data file and does some mathematical processing on its contents. The .NET assembly has the DllImports for the library and some code to wire it up. This all worked a few weeks ago. So now I'm left wondering if there's some other cause of BadImageFormatException that's less common than an x86/x64 conflict that I might be running into. Thanks.


  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me.

    I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speedups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
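
    Before giving up on pickle's speed advantage, it may be worth timing pickle at its highest protocol and plain json without pretty-printing (the indent=1 above makes the file bigger and slower to parse). A sketch, with Python 3 names - on the 2.x in the question the equivalents are cPickle and protocol=2:

        # Sketch: compare load times of json vs pickle at the highest protocol
        # on a dictionary shaped roughly like the one described (54,000 keys).
        import json, pickle, time

        points = dict(("point%05d" % i, {"x": float(i), "tags": ["a", "b"]})
                      for i in range(54000))

        with open("points.json", "w") as f:
            json.dump(points, f)           # no indent => smaller, faster to parse
        with open("points.pkl", "wb") as f:
            pickle.dump(points, f, protocol=pickle.HIGHEST_PROTOCOL)

        for name, path, loader, mode in [("json", "points.json", json.load, "r"),
                                         ("pickle", "points.pkl", pickle.load, "rb")]:
            t0 = time.time()
            with open(path, mode) as f:
                loader(f)
            print("%s load: %.2f s" % (name, time.time() - t0))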


  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem in my previous question. The scheme I would like to try is the following:

    1. The user requests an action from the RoR controller.
    2. The action makes some database queries, does some calculations, sets some session variable(s), and returns some RJS code as the response. This code could either update a progress bar and make another ajax request, or display the final result (e.g. a chart graphic) if all the processing is finished.
    3. The browser evaluates the javascript representation of the RJS. It may make another (recursive? is recursion allowed at all?) request, or just display the result for the user.

    So, my question this time is: how can I embed an XMLHttpRequest call into RJS code properly? Some things I'd like to know are:

    - Should I create a new thread to avoid stack overflow?
    - What Rails helpers (if any) should I use?
    - Has anybody ever done something similar before on Rails or with other frameworks?
    - Is my idea sane?


  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this, through the System.Xml libraries, likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all, just reading them and querying the data contained in them.

    Some of the XPath queries are quite involved and go across several levels of parent-child type relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block.

    One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted.

    Alternately, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc.

    I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach, I'm sure you folks can set me right...
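
    The stream-plus-fragments idea in the last paragraph is the usual workaround. As a language-neutral illustration of its shape, here is a Python sketch using iterparse (element names are invented); in .NET the analogous pieces are XmlReader.ReadSubtree() feeding small in-memory documents that the XPath queries then run against:

        # Sketch: stream a huge XML file, materializing one <record> subtree
        # at a time so memory stays flat while XPath-style queries still work.
        import xml.etree.ElementTree as ET

        for event, elem in ET.iterparse("huge.xml", events=("end",)):
            if elem.tag == "record":
                # the subtree under <record> is complete here; query it
                for hit in elem.findall(".//parent/child[@type='x']"):
                    print(hit.text)
                elem.clear()  # drop the subtree before moving on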


  • Vista 64-bit development tools

    - by Workshop Alex
    Well, okay. There's Visual Studio 2008 and Embarcadero Delphi/Studio, which are both able to create 64-bit .NET applications for Vista. And of course a lot of 32-bit applications will run on 64-bit Vista. If not, it's always possible to install VMware to create a virtual 32-bit Windows XP system to run 32-bit applications. So, plenty of options.

    But what I would like to see is a list of true 64-bit applications for Windows Vista and better. So if you know any useful 64-bit product, please share! (Especially compilers that generate native 64-bit code.)

    Tools would basically be anything that would make development a bit easier: debugging tools, image processing tools to create icons and bitmaps, hex editors to check the contents of binary files, XML editors to change XML files, etc. The tools from SysInternals, for example, seem to provide 64-bit versions or even support 64-bit systems natively. But how about all those other editors, viewers, browsers and other tools that we developers like to use? A 64-bit version of the Norton Commander/Midnight Commander or other file managers would be nice too. And for compilers, how about COBOL/Fortran/Ada/Smalltalk/Lisp/whatever compilers and languages for Vista?

    I would just like to see a complete list of anything useful for 64-bit development.


  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate data, and while rearchitecting how the data is imported, I came across an interesting issue.

    Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, and which has already proven to be much better than what we had.

    When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a field, 'Modified', that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that constantly run through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?
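
    One way to keep the "query only what changed" approach safe is to track a high-water mark with a small overlap rather than a fixed six-hour window, so a failed or slow run can't drop updates. A sketch (the RETS call and the persistence functions are placeholders):

        # Sketch: incremental pull driven by a stored high-water mark.
        from datetime import datetime, timedelta

        OVERLAP = timedelta(minutes=15)     # re-fetch a little history as insurance

        def load_last_sync():               # placeholder persistence
            return datetime(2012, 1, 1)

        def save_last_sync(ts):             # placeholder persistence
            pass

        def rets_search(modified_since):    # placeholder for the real RETS query
            return []

        def upsert(listing):                # placeholder idempotent write
            pass

        def sync():
            since = load_last_sync() - OVERLAP
            started = datetime.utcnow()
            for listing in rets_search(since):
                upsert(listing)             # idempotent, so the overlap is harmless
            save_last_sync(started)         # mark when this run *began*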


  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes - multiple times a day, in fact - users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem.

    Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the post data would be getting wiped out?

    1. User visits the edit screen for a page in the CMS I am using.
    2. The record id for the page is put into the form as a hidden value.
    3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
    4. User clicks the Submit button for my form.
    5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
    6. Uploadify uploads to a custom process script.
    7. Uploadify finishes uploading and triggers the Javascript completion callback.
    8. The Javascript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).


  • (rsErrorOpeningConnection) Could not obtain information about Windows NT group/user

    - by ChelleATL
    I am trying to deploy a report to the Reporting Services server but keep running up against this error:

        An error occurred during client rendering.
        An error has occurred during report processing. (rsProcessingAborted)
        Cannot create a connection to data source 'dataSource1'. (rsErrorOpeningConnection)
        Could not obtain information about Windows NT group/user 'DOMAIN\useradmin', error code 0x5.

    Here's my situation:

    - Everything is being run using DOMAIN\useradmin, and the report is using a remote database.
    - Reporting Services and SQL Server are both run under DOMAIN\useradmin.
    - DOMAIN\useradmin is a Windows AD login and is part of the server machine's Administrators group.
    - My test report is using a data source model that is in turn using a data source that connects to a database on a different SQL Server.
    - The data source is using "Credentials stored securely in the report server" with the options "Use as Windows credentials when connecting to the data source" and "Impersonate the authenticated user after a connection has been made to the data source." I am using the credentials of DOMAIN\useradmin, which is the db owner of the remote database.
    - DOMAIN\useradmin is assigned the roles System Administrator, System User and Browser, Content Manager, My Reports, Publisher, Report Builder.

    So if everything is being run under an über AD account, why am I getting this "Could not obtain information about Windows NT group/user 'DOMAIN\useradmin'" error?

    Under normal circumstances, an AD login with Publisher permissions will be developing reports using a data source model created by DOMAIN\useradmin, but using one of the remote database's users, which is mapped from yet another AD login.


  • Which persistent & lightweight message queue for cross-domain (> 2) data exchange with Rails integration?

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me?

    For now, there won't be a huge amount of data to process, but I don't want to be limited later... The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data change on a server, all servers should get the information and process it locally. (Should I create one channel per server on each of them?) The frontend is written in Rails, so it is important, in order to simplify development, that there is a gem / plugin to manage communications and the data sent.

    At this time, RabbitMQ + Workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me afraid because of Java (I really don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature as these two.

    There might be a lot of development using this kind of technology, so I can't go the wrong way! Thank you for your help.
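
    For the "every server hears about every change" requirement, a fanout exchange is the usual RabbitMQ shape: publishers broadcast once, and each server consumes from its own private queue bound to the exchange, so no per-server channel bookkeeping is needed. A sketch with the Python pika client (pika 1.x API; the exchange name and payload are invented - on the Rails side the same message would be published through whatever AMQP gem is chosen):

        # Sketch: fanout broadcast of change notifications (pika 1.x).
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.exchange_declare(exchange="data_changes", exchange_type="fanout")

        # publisher side: broadcast a change notification once
        ch.basic_publish(exchange="data_changes", routing_key="",
                         body='{"model": "Listing", "id": 42}')

        # consumer side (run on every server): private queue bound to the fanout
        result = ch.queue_declare(queue="", exclusive=True)
        ch.queue_bind(exchange="data_changes", queue=result.method.queue)

        def handle(ch_, method, properties, body):
            print("apply change locally:", body)

        ch.basic_consume(queue=result.method.queue, on_message_callback=handle,
                         auto_ack=True)
        ch.start_consuming()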

