Search Results

Search found 8429 results on 338 pages for 'batch processing'.


  • how to combine widget webapp framework with SEO-friendly CSS and JS files

    - by Ali
    Hi guys, I'm writing a webapp using Zend Framework and a homebrew widget system. Every widget has a controller and can choose to render one of many views. This really helps us modularize, reconfigure and reuse the widgets anywhere on the site.

    The problem is that the views of each widget contain their own JS and CSS code, which leads to very messy HTML when the whole page is put together: you get pockets of style and script tags everywhere. This is bad for a lot of different reasons, as I'm sure you know, but it has a profound effect on our SEO as well. Several solutions I've been able to come up with:

    1. Separate the CSS and JS of every view of every widget into its own file. This has serious drawbacks for load times (many more resources have to be loaded separately), and it makes coding very difficult, since you now have 3-4 files open just to edit a widget.

    2. Combine all the widget CSS into a single file (same with JS). This would also lead to a massive load when someone first enters the site, it mixes up the CSS and JS of all widgets so they're harder to keep track of, and it has other problems that I'm sure you can think of.

    3. Use method 1 (separate CSS and JS for every widget) during development and, when delivering the page, stitch all the CSS and JS together. This obviously needs more processing time, and of course the creation of such a system, etc.

    My question is what you guys think of these solutions, or whether there are pre-existing solutions that you know of (or any tech that might help) to solve this problem. I really appreciate all of your thoughts and comments!! Thanks guys, Ali
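
    As an illustration of option 3 above, here is a minimal sketch in Python. The app itself is Zend/PHP, so treat this purely as pseudocode for the idea, not a drop-in implementation; the widgets/ directory layout and the cache path are assumptions:

        import hashlib
        import os

        def build_bundle(widget_names, ext, widget_root="widgets", cache_dir="cache"):
            # Concatenate per-widget assets (.css or .js) into one cached file.
            # The bundle is keyed by the widget list, so every page layout gets
            # a stable filename and the stitching cost is paid only once.
            key = hashlib.md5("|".join(widget_names).encode()).hexdigest()
            bundle_path = os.path.join(cache_dir, "%s.%s" % (key, ext))
            if not os.path.exists(bundle_path):
                parts = []
                for name in widget_names:
                    asset = os.path.join(widget_root, name, "view.%s" % ext)
                    if os.path.exists(asset):
                        with open(asset) as f:
                            parts.append("/* %s */\n%s" % (name, f.read()))
                os.makedirs(cache_dir, exist_ok=True)
                with open(bundle_path, "w") as f:
                    f.write("\n".join(parts))
            return bundle_path  # emit a single <link>/<script> tag pointing here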

    Read the article

  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem from my previous question. The scheme I would like to try is the following:

    1. The user requests an action from a RoR controller.
    2. The action makes some database queries, does some calculations, sets some session variable(s) and returns some RJS code as the response. This code could either update a progress bar and make another Ajax request, or display the final result (e.g. a chart graphic) if all the processing is finished.
    3. The browser evaluates the JavaScript representation of the RJS. It may make another (recursive? is recursion allowed at all?) request, or just display the result for the user.

    So, my question this time is: how can I embed an XMLHttpRequest call into RJS code properly? Some things I'd like to know are:

    - Should I create a new thread to avoid stack overflow?
    - What Rails helpers (if any) should I use?
    - Has anybody ever done something similar before, on Rails or with other frameworks?
    - Is my idea sane?

    Read the article

  • "User Friendly" .net compatible Regex/Text matching tools?

    - by Binary Worrier
    Currently in our software we provide a hook where we call a DLL built by our clients to parse information out of documents we are processing (the DLL takes in some text or a file and returns a list of name/value pairs). E.g. we're given a Word doc or text file to archive. We do various things to the file, and call a DLL that will return "pertinent" information about the file. Among other things we store that "pertinent" data for posterity. What is considered "pertinent" depends on the client and the type of the document; we don't care, we get it and store it.

    I've been asked to develop a user-friendly "something" that will allow a non-programmer user to "configure" how to get this data from a plain text document (<humor>The user story ends with the helpful suggestion/query "We could use regex for this?"</humor>).

    It's safe to assume that a list of regexes isn't going to cut it. I've written some of these parsers for customers; the regexes to do them would be hideous, and some of them can't be done with regexes at all. Also, one of the requirements above is "user friendly", which rules out anything that has users seeing or editing regex expressions.

    As you can guess, I don't have a fortune of time to do this, and am wondering: is there anything out there that I can plug in to our app that has a nice front end and does exactly what I need? :) No? Whadda mean no! . . . sigh. OK then, failing that, is there anything out there that "visually" builds regexes and/or other pattern-matching expressions, and then allows one to run those expressions against some text? The MS BRE will do what I want, but I need something prettier that looks less like code. Thanks guys,

    Read the article

  • Better viewing of postfix mail queue files than postcat?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days, and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent.

    My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress\@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
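
    On the attachment side, Python's standard email module can split a raw message into its parts once you have the message text; a minimal sketch alongside the Perl above (note the caveat in the comments about postcat's queue-file annotations):

        import email
        import os

        def extract_attachments(raw_message, out_dir="."):
            # Parse one RFC 2822 message and write out its attachments.
            # Caveat: postcat -q wraps the message in queue-file annotations,
            # so you would first strip everything above the header section;
            # this sketch assumes raw_message is already plain RFC 2822.
            msg = email.message_from_string(raw_message)
            print("Subject:", msg.get("Subject", "(none)"))
            for part in msg.walk():
                filename = part.get_filename()
                if filename:  # parts carrying a filename are the attachments
                    with open(os.path.join(out_dir, filename), "wb") as f:
                        f.write(part.get_payload(decode=True))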

    Read the article

  • iPhone accelerometer:didAccelerate: seems to not get called while I am running a loop

    - by AJ
    An accelerometer-related question. (Sorry if the formatting doesn't look right; it's the first time I am using this site.) I got the accelerometer working as expected using the standard code:

        UIAccelerometer *accel = [UIAccelerometer sharedAccelerometer];
        accel.delegate = self;
        accel.updateInterval = 0.1; // I also tried other update values

    I use NSLog to log every time the accelerometer:didAccelerate: method in my class is called. The method gets called as expected and everything works fine up to here. However, when I run a loop, the above method doesn't seem to get called. Something like this:

        float firstAccelValue = globalAccel; // the x-accel value (stored in a global by the above method)
        float nextAccelValue = firstAccelValue;
        while (nextAccelValue == firstAccelValue) {
            // do something
            nextAccelValue = globalAccel; // globalAccel is updated by the accelerometer method
        }

    The above loop never exits - expectedly, since the accelerometer:didAccelerate: method is not getting called, and hence globalAccel never changes value. If I use a fixed condition to break the while loop, I can see that after the loop ends, the method calls work fine again. Am I missing something obvious here? Or does the accelerometer method not fire while certain processing is being done? Any help would be greatly appreciated! Thanks!

    Read the article

  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello. I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET. So far I used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemory exception was thrown (I assume this is because I cannot load the whole table into memory). :)

    Now I am looking for a good way to perform this extract/transform/load without storing the whole table in memory. I would like to use a DataReader and read the single data records into a DataTable. If there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to have free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory. My code so far:

        Dim dt As New DataTable
        Dim count As Int32
        count = 0
        ' reads data records from the Oracle database table
        While rdr.Read()
            ' read n records and add them to a DataTable
            While count < 10000
                dt.Rows.Add(????)
                count = count + 1
            End While
            ' transform data in the DataTable, and insert it into the destination
            ' flush the DataTable after insertion
            count = 0
        End While

    Thank you very much for your response!
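
    The batching pattern itself is language-neutral; here is a minimal sketch of it in Python against a generic DB-API cursor (purely illustrative - the table names and INSERT statement are made up, and in ADO.NET the DataReader would play the cursor's role):

        def batched(cursor, batch_size=10000):
            # Yield lists of up to batch_size rows from a DB-API cursor, so
            # the whole source table never sits in memory at once.
            while True:
                rows = cursor.fetchmany(batch_size)
                if not rows:
                    break
                yield rows

        # Usage sketch, assuming DB-API connections src and dst:
        # cur = src.cursor()
        # cur.execute("SELECT id, name FROM big_table")
        # for chunk in batched(cur):
        #     dst.cursor().executemany("INSERT INTO dest VALUES (?, ?)", chunk)
        #     dst.commit()  # each processed chunk is dropped before the next read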

    Read the article

  • jquery select elements between two elements that are not siblings

    - by naugtur
    E.g. (I've removed attributes, but it's a bit of auto-generated HTML):

        <img class="p"/>
        <div>
        hello world
        <p>
        <font><font size="2">text.<img class="p"/> some text</font></font>
        </p>
        <img class="p"/>
        <p>
        <font><font size="2">more text<img class="p"/> another piece of text
        </font></font>
        </p><img class="p"/>
        some text on the end
        </div>

    I need to apply some highlighting with backgrounds to all text that is between the two closest [in the HTML code] img.p elements when hovering over the first of them. I have no idea how to do that. Let's say I hover the first img.p - it should highlight "hello world" and "text." and nothing else. And now the worst part - I need the backgrounds to disappear on mouseleave. I need it to work with any HTML mess possible. The above is just an example, and the structure of the documents will differ. [Tip: processing the whole HTML before binding hover and putting some spans etc. in is OK, as long as it doesn't change the looks of the output document.]

    Read the article

  • Disabling redraw in WinForms app

    - by Ryan
    I'm working on a C#.Net application which has a somewhat annoying bug in it. The main window has a number of tabs, each of which has a grid on it. When switching from one tab to another, or selecting a different row in a grid, it does some background processing, and during this the menu flickers as it's redrawn (File, Help, etc. menu items as well as window icon and title).

    I tried disabling the redraw on the window while switching tabs/rows (WM_SETREDRAW message) at first. In one case, it works perfectly. In the other, it solves the immediate bug (title/menu flicker), but between disabling the redraw and enabling it again, the window is "transparent" to mouse clicks - there's a small window (<1 sec) in which I can click and it will, say, highlight an icon on my desktop, as if the app wasn't there at all. If I have something else running in the background (Firefox, say) it will actually get focus when clicked (and draw part of the browser, say the address bar). Here's the code I added:

        m = new Message();
        m.HWnd = System.Windows.Forms.Application.OpenForms[0].Handle; // top level
        m.WParam = (IntPtr)0; // disable redraw
        m.LParam = (IntPtr)0; // unused
        m.Msg = 11;           // WM_SETREDRAW
        WndProc(ref m);

        // <snip> - Application ignores clicks while in this section (in one case)

        m = new Message();
        m.HWnd = System.Windows.Forms.Application.OpenForms[0].Handle; // top level
        m.WParam = (IntPtr)1; // enable
        m.LParam = (IntPtr)0; // unused
        m.Msg = 11;           // WM_SETREDRAW
        WndProc(ref m);
        System.Windows.Forms.Application.OpenForms[0].Refresh();

    Does anyone know if a) there's a way to fix the transparent-application problem here, or b) if I'm doing it wrong in the first place and this should be fixed some other way?

    Read the article

  • Matching a Repeating Sub Series using a Regular Expression with PowerShell

    - by Hinch
    I have a text file that lists the names of a large number of Excel spreadsheets, and the names of the files that are linked to from the spreadsheets. In simplified form it looks like this:

        "Parent File1.xls"
        Link: ChildFileA.xls
        Link: ChildFileB.xls
        "ParentFile2.xls"
        "ParentFile3.xls"
        Blah
        Link: ChildFileC.xls
        Link: ChildFileD.xls
        More Junk
        Link: ChildFileE.xls
        "Parent File4.xls"
        Link: ChildFileF.xls

    In this example, ParentFile1.xls has embedded links to ChildFileA.xls and ChildFileB.xls, ParentFile2.xls has no embedded links, and ParentFile3.xls has three embedded links. I am trying to write a regular expression in PowerShell that will parse the text file, producing output in the following form:

        ParentFile1.xls:ChildFileA.xls,ChildFileB.xls
        ParentFile3.xls:ChildFileC.xls,ChildFileD.xls,ChildFileE.xls

    etc. The task is complicated by the fact that the text file contains a lot of junk between the lines, and a parent may not always have a child. Furthermore, a single file name may pass over multiple lines. However, it's not as bad as it sounds, as the parent and child file names are always clearly demarcated (the parent with quotes and the child with a prefix of "Link: "). The PowerShell code I've been using is as follows:

        $content = [string]::Join([environment]::NewLine, (Get-Content C:\Temp\text.txt))
        $regex = [regex]'(?im)\s*\"(.*)\r?\n?\s*(.*)\"[\s\S]*?Link: (.*)\r?\n?'
        $regex.Matches($content) | %{$_.Groups[1].Value + $_.Groups[2].Value + ":" + $_.Groups[3].Value}

    Using the example above, it outputs:

        ParentFile1.xls:ChildFileA.xls
        ParentFile2.xls""ParentFile3.xls:ChildFileC.xls
        ParentFile4.xls:ChildFileF.xls

    There are two issues. Firstly, the inclusion of the "" instead of a newline whenever a parent without a child is processed. And the second issue, which is the most important, is that only a single child is ever shown for each parent. I'm guessing I need to somehow recursively capture and display the multiple child links that exist for each parent, but I'm totally stumped as to how to do this with a regular expression. Any help would be greatly appreciated. The file contains hundreds of thousands of lines, and manual processing is not an option :)
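
    Since the parent/child pairs repeat, a line-oriented state machine is much easier to get right than one regex over the whole file. A minimal sketch in Python (illustrative only; the same two-pattern loop translates directly to, say, a PowerShell switch -regex over the file, and it ignores the names-split-across-lines wrinkle mentioned above):

        import re

        def parse_links(lines):
            # A quoted line starts a new parent; a "Link:" line adds a child
            # to the current parent; anything else is junk and is skipped.
            result = []
            for line in lines:
                m = re.match(r'\s*"(.+)"\s*$', line)
                if m:
                    # expected output drops spaces: "Parent File1.xls" -> ParentFile1.xls
                    result.append((m.group(1).replace(" ", ""), []))
                    continue
                m = re.search(r'Link:\s*(\S+)', line)
                if m and result:
                    result[-1][1].append(m.group(1))
            return result

        # for parent, children in parse_links(open(r"C:\Temp\text.txt")):
        #     if children:
        #         print("%s:%s" % (parent, ",".join(children)))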

    Read the article

  • I get an Exception when trying to implement reset password with Authlogic

    - by Tam
    Hi, I'm using Ruby on Rails 2.3.5 and Ruby 1.9, and have implemented Authlogic as my authentication engine. Authlogic works fine. But now I have been trying to implement password reset using the following tutorial: http://www.binarylogic.com/2008/11/16/tutorial-reset-passwords-with-authlogic/

    But I'm getting the following exception when I visit mysite.com/password_resets/new:

        Processing PasswordResetsController#new (for 127.0.0.1 at 2010-03-13 01:09:45) [GET]
        Completed in 9ms (DB: 0) | 200 [http://localhost/password_resets/new]
        [2010-03-13 01:09:45] ERROR TypeError: can't convert Array into String
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/string/output_safety.rb:34:in `concat'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/core_ext/string/output_safety.rb:34:in `concat_with_safety'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/webrick.rb:63:in `block in service'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/response.rb:158:in `each'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/response.rb:158:in `each'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:16:in `send'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:16:in `method_missing'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:22:in `method_missing'
        /Users/tammam56/.rvm/gems/ruby-1.9.1-p378/gems/rack-1.0.1/lib/rack/handler/webrick.rb:62:in `service'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/httpserver.rb:111:in `service'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/httpserver.rb:70:in `run'
        /Users/tammam56/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/webrick/server.rb:183:in `block in start_thread'

    Honestly, I'm not sure where to start debugging this. Is it because I'm using Ruby 1.9? Some people seem to be having trouble with Authlogic and Ruby 1.9: http://isitruby19.com/authlogic

    Please advise me on how to go about solving/debugging this issue. Thanks, Tam

    Read the article

  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab-delimited, with the user-agent strings in double quotes. I need to parse each of these columns, and based on the answer to my other post I used the Text::CSV module.

        94410634 0 GET "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)" 1

    The code is a simple one:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        my $csv = Text::CSV->new(sep_char => "\t");

        while (<>) {
            if ($csv->parse($_)) {
                my @columns = $csv->fields();
                print "@columns\n";
            } else {
                my $err = $csv->error_input;
                print "Failed to parse line: $err";
            }
        }

    But I get the "Failed to parse line:" error when I try it on this data set. What am I doing wrong? I need to extract the 4th column containing the user-agent strings for further processing.
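
    For comparison, a minimal sketch of the same task using Python's csv module, which handles tab-delimited input with quoted fields directly (the log file name is an assumption; index 3 is the 4th column, per the sample row above):

        import csv

        with open("access.log", newline="") as f:
            for row in csv.reader(f, delimiter="\t", quotechar='"'):
                if len(row) > 3:
                    print(row[3])  # the user-agent string, surrounding quotes stripped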

    Read the article

  • Strategies for serializing an object for auditing/logging purpose in .NET?

    - by Jiho Han
    Let's say I have an application that processes messages. Messages are just objects in this case that implement the IMessage interface, which is just a marker. In this app, if a message fails to process, then I want to log it, first of all for auditing and troubleshooting purposes. Secondly, I might want to use it for re-processing.

    Ideally, I want the message to be serialized into a format that is human-readable. The first candidate is XML, although there are others like JSON. If I were to serialize the messages as XML, I want to know whether the message object is XML-serializable. One way is to reflect on the type and see if it has a parameterless constructor; the other is to require the IXmlSerializable interface. I'm not too happy with either of these approaches. There is a third option, which is to try to serialize it and catch exceptions. This doesn't really help - I want to, in some way, stipulate that IMessage (or a derived type) should be XML-serializable.

    The reflection route has obvious disadvantages, such as security, performance, etc. The IXmlSerializable route locks my messages down to one format, when in the future I might want to change the serialization format to JSON. The other thing is that even the simplest objects now must implement the ReadXml and WriteXml methods. Is there a route that involves the least amount of work, that lets me serialize an arbitrary object (as long as it implements the marker interface) into XML, but does not lock future messages into XML?

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human-readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
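
    Two details in the snippet above are worth measuring separately: jsonpickle adds type-reconstruction metadata on top of plain JSON, and indent=1 inflates the file. A minimal benchmark sketch using only the standard library (the data is fake, shaped roughly like the question's 54,000-key dictionary; exact numbers will vary by machine):

        import json
        import pickle
        import time

        # Fake data shaped like the question's: simple values only.
        data = {"point-%d" % i: {"name": "p%d" % i, "vals": [1.0, 2.0, 3.0]}
                for i in range(54000)}

        blob_json = json.dumps(data)  # no indent=1, which inflates the file
        blob_pickle = pickle.dumps(data, pickle.HIGHEST_PROTOCOL)

        for label, loads, blob in [("json", json.loads, blob_json),
                                   ("pickle", pickle.loads, blob_pickle)]:
            start = time.perf_counter()
            loads(blob)
            elapsed = time.perf_counter() - start
            print("%s: size=%d, load=%.3fs" % (label, len(blob), elapsed))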

    Read the article

  • What is the best way to determine the path to the ISV directory?

    - by Luke Baulch
    MSCRM 4.0. Problem: I'm currently storing XML files in the ISV directory along with my web applications. From a plugin (or potentially a separate app), I need to find an easy way to navigate to the ISV directory to read these XML files. This routine will be called extremely often, so minimizing processing should be a strong consideration. Potential solutions:

    - Registry: there is a registry key called 'WebSitePath' with the data 'C:\Inetpub\wwwroot\CRM'. I could potentially use this to build the path. (Will this be the same on all systems/installations?)
    - IIS directory data: looping through the DirectoryEntries of path "IIS://localhost/W3SVC", I could obtain the web application whose description is equal to "Microsoft Dynamics CRM". (Will this be the same on all systems/installations?)
    - Webservice: create one to read and return the data contained in these XML files. The webservice would have easy access to its executing directory.
    - Database: store the data of these files in the database.

    Help: can anyone suggest a simpler solution for obtaining and reading a file from the ISV directory? If not, which of the above solutions would be the quickest to process? Thanks for any and all contributions.
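
    As an illustration of the registry option, a minimal sketch in Python (the real code would be C#, and the hive path below is an assumption - verify where your installation actually stores the WebSitePath value before relying on it):

        import os
        import winreg  # Windows-only standard library module

        def isv_path():
            # Read the CRM web site root from the registry and append ISV.
            # r"SOFTWARE\Microsoft\MSCRM" is an assumed location for the
            # 'WebSitePath' value mentioned above.
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                r"SOFTWARE\Microsoft\MSCRM") as key:
                website_path, _ = winreg.QueryValueEx(key, "WebSitePath")
            return os.path.join(website_path, "ISV")

        # print(isv_path())  # e.g. C:\Inetpub\wwwroot\CRM\ISV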

    Read the article

  • suggestions for a 3D graph rendering library?

    - by Sandro
    Hello coders! So I'm not sure how Stack Overflow-friendly this question is, since it doesn't have a quick, clear-cut answer, but here we go...

    I have a Java program that generates data for a directed graph. Now I need to render this graph. The data needs to be laid out in 3D, and I want to be able to define which plane an edge lives in (each edge will only need to occupy one plane of the 3D space). I also need the ability to navigate around the graph. Since I know that this kind of stuff is hard, I'm going shopping. So far I've looked into (in no particular order):

    - JUNG: lacks 3D support
    - Cytoscape: not sure how much I'll be able to define edge drawing; haven't seen a non-bioinformatics application of it yet
    - JGraph: I didn't see any 3D applications yet
    - Prefuse: looks promising; does anyone know anything else about it?
    - Gephi: documentation looks scarce
    - Processing: does this play well with Java?

    I'm also considering doing some combination of OpenGL + Swing rendering to create a 3D graph from multiple 2D graphs. I am also not averse to the idea of linking from another language. Any ideas? Thank you.

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this, through the System.Xml libraries, likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all, just reading them and querying the data contained in them.

    Some of the XPath queries are quite involved and go across several levels of parent-child type relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block.

    One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc.

    I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach, I'm sure you folks can set me right...
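
    The fragment idea maps to a standard streaming pattern: read forward until a subtree of interest is complete, load just that subtree, query it in memory, discard it. A minimal sketch in Python (illustrative only - the element name and query are made up; the .NET analogue would be along the lines of XmlReader.ReadSubtree() feeding each fragment to an in-memory XPath evaluation):

        import xml.etree.ElementTree as ET

        def stream_query(path, record_tag):
            # Walk the file incrementally; each time a <record_tag> subtree is
            # complete, query it in memory, then clear it so memory stays flat.
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == record_tag:
                    # ElementTree supports a limited XPath subset on fragments.
                    for hit in elem.findall(".//child[@flag='x']"):
                        yield hit.get("id")
                    elem.clear()  # drop the processed subtree

        # for record_id in stream_query("huge.xml", "record"):
        #     print(record_id)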

    Read the article

  • Working with images (CGImage), exif data, and file icons

    - by Nick
    What I am trying to do (under 10.6)... I have an image (JPEG) that includes an icon in the image file (that is, you see an icon based on the image in the file, as opposed to a generic JPEG icon, in file open dialogs in a program). I wish to edit the exif metadata and save it back to the image in a new file. Ideally I would like to save this back to an exact copy of the file (i.e. preserving any custom embedded icons created, etc.); however, in my hands the icon is lost. My code (some bits removed for ease of reading):

        // set up source ref -- I THINK THE PROBLEM IS HERE - NOT GRABBING THE INITIAL DATA
        CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef) URL, NULL);

        // snag metadata
        NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

        // make metadata mutable
        NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease];

        // grab exif
        NSMutableDictionary *EXIFDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease];

        << edit exif >>

        // add back edited exif
        [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];

        // get source type
        CFStringRef UTI = CGImageSourceGetType(source);

        // set up write data
        NSMutableData *data = [NSMutableData data];
        CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)data, UTI, 1, NULL);

        // add the image plus modified metadata -- PROBLEM HERE? NOT ADDING THE ICON
        CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);

        // write to data
        BOOL success = NO;
        success = CGImageDestinationFinalize(destination);

        // save data to disk
        [data writeToURL:saveURL atomically:YES];

        // cleanup
        CFRelease(destination);
        CFRelease(source);

    I don't know if this is really a question of image handling, file handling, post-save processing (I could use sips), or me just being thick (I suspect the last). Nick

    Read the article

  • How to add an image to an SSRS report with a dynamic url?

    - by jrummell
    I'm trying to add an image to a report. The image src URL is an IHttpHandler that takes a few query string parameters. Here's an example:

        <img src="Image.ashx?item=1234567890&lot=asdf&width=50" alt=""/>

    I added an Image to a cell and then set Source to External and Value to the following expression:

        ="Image.ashx?item="+Fields!ItemID.Value+"&lot="+Fields!LotID.Value+"&width=50"

    But when I view the report, it renders the image HTML as:

        <IMG SRC="" />

    What am I missing?

    Update: Even if I set Value to "image.jpg" it still renders an empty src attribute. I'm not sure if it makes a difference, but I'm using this with a VS 2008 ReportViewer control in Remote processing mode.

    Update: I was able to get the images to display in the Report Designer (VS 2005) with an absolute path (http://server/path/to/http/handler). But they didn't display on the Report Manager website. I even set up an Unattended Execution Account that has access to the external URLs.

    Read the article

  • Sudden issues reading uncompressed video using opencv

    - by JohnSavage
    I have been using a particular pipeline to process video using OpenCV: encoding uncompressed video (FOURCC = 0), then using the OpenCV Python bindings to open and work on these files. This had been working fine for me with OpenCV 2.3.1a on Ubuntu 11.10 until just a few days ago. For some reason it currently only allows me to read the first frame of a given file, and only the first time I open that file. Further frames are not read, and once I touch the file once with my program, it then cannot even read the first frame.

    In more detail: I created the uncompressed video files as follows:

        out_video.open(out_vid_name,
                       0, // FOURCC = 0 means record raw
                       fps, Size(640, 480));

    Again, these videos worked fine for me until about a week ago. Now, when I try to open one I get the following message (from what I think is ffmpeg):

        Processing video.avi
        Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
        [avi @ 0x29251e0] parser not found for codec rawvideo, packets or times may be invalid.

    It reads and displays the first frame fine, but then fails to read the next frame. Then, when I try to run my code on the same video, the capture still opens with the same message as above; however, it cannot even read the very first frame. Here is the code to open the capture:

        self.capture = cv2.VideoCapture(filename)
        if not self.capture.isOpened():
            print "Error: could not open capture"
            sys.exit()

    Again, this part is passed without any issue, but then the break happens at:

        success, rgb = self.capture.read()
        if not success:
            print "error: could not read frame"
            return False

    This part breaks at the second frame on the first run of the video file, and then at the first frame on subsequent runs. I really don't know where to even begin debugging this. Please help!

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect, and are removed from the debug pane. However, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...".

    There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log is being written I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; however, I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks.

    EDIT: I created this test to check the garbage collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be dereferenced/GC'd.

    Read the article

  • C# LINQ to XML missing space character

    - by Fossaw
    I write an XML file "by hand" (i.e. not with LINQ to XML), which sometimes includes an open/close tag pair containing a single space character. Upon viewing the resulting file, all appears correct; example below...

        <Item>
          <ItemNumber>3</ItemNumber>
          <English> </English>
          <Translation>Ignore this one. Do not remove.</Translation>
        </Item>

    ...the reasons for doing this are various and irrelevant; it is done. Later, I use a C# program with LINQ to XML to read the file back and extract the record...

        XElement X_EnglishE = null; // This is CRAZY
        foreach (XElement i in Records)
        {
            X_EnglishE = i.Element("English"); // There is only one damned record!
        }
        string X_English = X_EnglishE.ToString();

    ...and test to make sure it is unchanged from the database record. I detect a change when processing Items where the field had the single space character...

        +E+ Text[3] English source has been altered:
        Was: >>> <<<
        Now: >>><<<

    ...the >>> and <<< parts I added to see what was happening (it's hard to see space characters). I have fiddled around with this but can't see why this is so. It is not absolutely critical, as the field is not used (yet), but I cannot trust C# or LINQ or whatever is doing this if I do not understand why it is so. So what is doing that, and why?

    Read the article

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model, and I'm trying to understand why that model is chosen over a peer-to-peer (PTP) model.

    To me, PTP would be preferable, because going direct between peers would be more efficient (fewer IP hops and no processing along the way by a server computer), with no need for a server and its costs and dependencies. I found some products offer the ability to switch from PTP to C/S if the PTP proves insufficient. As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation: the server (supposedly with more bandwidth) could do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, and the server would use its superior bandwidth to relay the message to the multiple other peers. But in a situation with one-on-one voice chatting, it seems that PTP would be best: a server would not reduce either peer's bandwidth requirements, and would only add unnecessary overhead, dependency and cost.

    In one-on-one voice chatting:

    - Am I mistaken on anything above?
    - Would peer-to-peer be best?
    - Would a server provide anything of value that could not be provided by a client-only program?
    - Is there anything else that I should be taking into consideration?

    And lastly, can you recommend any Silverlight PTP or C/S voice chat products? Thanks in advance for any info.

    Read the article

  • (rsErrorOpeningConnection) Could not obtain information about Windows NT group/user

    - by ChelleATL
    I am trying to deploy a report to the Reporting Services server, but keep running up against this error:

        An error occurred during client rendering.
        An error has occurred during report processing. (rsProcessingAborted)
        Cannot create a connection to data source 'dataSource1'. (rsErrorOpeningConnection)
        Could not obtain information about Windows NT group/user 'DOMAIN\useradmin', error code 0x5.

    Here's my situation: everything is being run using DOMAIN\useradmin, and the report is using a remote database.

    - Reporting Services and SQL Server are both run under DOMAIN\useradmin.
    - DOMAIN\useradmin is a Windows AD login and is part of the server machine's Administrators group.
    - My test report uses a data source model that in turn uses a data source connecting to a database on a different SQL Server. The data source uses "Credentials stored securely in the report server" with the options "Use as Windows credentials when connecting to the data source" and "Impersonate the authenticated user after a connection has been made to the data source". It uses the credentials of DOMAIN\useradmin, which is the db owner of the remote database.
    - DOMAIN\useradmin is assigned the roles System Administrator and System User, plus Browser, Content Manager, My Reports, Publisher and Report Builder.

    So if everything is being run under an über AD account, why am I getting this "Could not obtain information about Windows NT group/user 'DOMAIN\useradmin'" error?

    Under normal circumstances, an AD login with Publisher permissions develops reports using a data source model created by DOMAIN\useradmin, but using one of the remote database's users, which is mapped from yet another AD login. I ran the following statements and no errors were returned:

        use master
        go
        xp_grantlogin 'DOMAIN\useradmin'
        go
        xp_logininfo 'DOMAIN\useradmin'
        go

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes - multiple times a day, in fact - users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the POST data. And of course, neither I nor any of the developers on my team can reproduce the problem.

    Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the POST data would be getting wiped out?

    1. User visits the edit screen for a page in the CMS I am using.
    2. The record id for the page is put into the form as a hidden value.
    3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
    4. User clicks the Submit button for my form.
    5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
    6. Uploadify uploads to a custom process script.
    7. Uploadify finishes uploading and triggers the JavaScript completion callback.
    8. The JavaScript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

    Read the article

  • Yet another Python Windows CMD mklink problem ... can't get it to work!

    - by Felix Dombek
    OK, I have just posted another question which outlined my program, but the specific problem was different. Now, my program just stops working without any message whatsoever. I'd be grateful if someone could help me here. I want to create symlinks for each file in a directory structure, all in one large flat folder, and have the following code by now:

        # loop over directory structure:
        # for all items in the current directory,
        #   if the item is a directory, recurse into it;
        #   else it's a file, so create a symlink for it
        def makelinks(folder, targetfolder, cmdprocess=None):
            if not cmdprocess:
                cmdprocess = subprocess.Popen("cmd",
                                              stdin=subprocess.PIPE,
                                              stdout=subprocess.PIPE,
                                              stderr=subprocess.PIPE)
            print(folder)
            for name in os.listdir(folder):
                fullname = os.path.join(folder, name)
                if os.path.isdir(fullname):
                    makelinks(fullname, targetfolder, cmdprocess)
                else:
                    makelink(fullname, targetfolder, cmdprocess)

        # for a given file, create one symlink in the target folder
        def makelink(fullname, targetfolder, cmdprocess):
            linkname = os.path.join(targetfolder,
                                    re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname))
            if not os.path.exists(linkname):
                try:
                    os.remove(linkname)
                    print("Invalid symlink removed:", linkname)
                except:
                    pass
            if not os.path.exists(linkname):
                cmdprocess.stdin.write("mklink " + linkname + " " + fullname + "\r\n")

    So this is a top-down recursion where first the folder name is printed, then the subdirectories are processed. If I run this now over some folder, the whole thing just stops after 10 or so symbolic links. Here is the output:

        D:\Musik\neu
        D:\Musik\neu\# Electronic
        D:\Musik\neu\# Electronic\# tag & reencode
        D:\Musik\neu\# Electronic\# tag & reencode\ChillOutMix
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B 2

    The program still seems to run, but no new output is generated. It created 9 symlinks for some files in the # tag & reencode folder and the first three files in the ChillOutMix folder. The cmd.exe window is still open and empty, and shows in its title bar that it is currently processing the mklink command for the third file in ChillOutMix. I tried to insert a time.sleep(2) after each cmdprocess.stdin.write in case Python is just too fast for the cmd process, but it doesn't help. Does anyone know what the problem might be?
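
    For what it's worth, one classic cause of exactly this symptom is that the long-lived cmd process writes output to its stdout pipe, which the script never reads; once the OS pipe buffer fills, cmd blocks and everything hangs. A hedged sketch of a variant that avoids the shared pipe entirely (mklink is a cmd builtin, so it still needs cmd /c; slower, but it cannot deadlock this way):

        import subprocess

        def makelink_safe(linkname, fullname):
            # One short-lived cmd per link; call() waits for it to exit and
            # inherits the console, so no pipe buffer can fill up unread.
            subprocess.call(["cmd", "/c", "mklink", linkname, fullname])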

    Read the article
