Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 668/903

  • How to look for different types of files in a directory?

    - by herrow
        public List<string> MapMyFiles()
        {
            List<FileInfo> batchaddresses = new List<FileInfo>();
            foreach (object o in lstViewAddresses.Items)
            {
                try
                {
                    string[] files = Directory.GetFiles(o.ToString(), "*-E.esy");
                    files.ToList().ForEach(f => batchaddresses.Add(new FileInfo(f)));
                }
                catch
                {
                    if (MessageBox.Show(o.ToString() + " does not exist. Process anyway?",
                                        "Continue?", MessageBoxButtons.YesNo) == DialogResult.Yes)
                    {
                    }
                    else
                    {
                        Application.Exit();
                    }
                }
            }
            return batchaddresses.OrderBy(f => f.CreationTime)
                                 .Select(f => f.FullName).ToList();
        }

    I would like to add to the list not only ".esy" files but also "p-.csv" files. How do I do this?
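
    As a sketch of the idea only (written in Python rather than the asker's C#): run one search per pattern, merge the matches, then sort by creation time. The directory list is hypothetical and the pattern strings are copied from the question as given:

        import glob
        import os

        def map_my_files(directories, patterns=('*-E.esy', 'p-.csv')):
            matches = []
            for directory in directories:
                for pattern in patterns:                    # one pass per pattern
                    matches.extend(glob.glob(os.path.join(directory, pattern)))
            return sorted(matches, key=os.path.getctime)    # oldest first, as in the question

        # print(map_my_files([r'C:\batches']))              # hypothetical usage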

    Read the article

  • How can I save/print values in my WATCH list in Visual Studio 2008?

    - by Rising Star
    When I attach the Visual Studio 2008 debugger to my web server process, I sometimes browse a large amount of data in my watch list. Suppose I have an array of string variables that I expand to show 20 entries. It seems that the only way to save these values is to copy and paste them one at a time. I have gone so far as to take a screenshot in order to have a record of what the values were (to refer to later or to print a hard copy). Is there an easy way to save and print these values? I am familiar with the new "IntelliTrace" feature in Visual Studio 2010 Ultimate, but it seems like overkill for this purpose. I just want to take something like an array of strings and save it for later reference once I've stopped the debugger. What's a good way to do this?

    Read the article

  • UITableView headings shown on top of MBProgressHUD

    - by Chris Ballinger
    So I have a subclass of UITableViewController that loads some data from the internet and uses MBProgressHUD during the loading process. I use the standard MBProgressHUD initialization:

        HUD = [[MBProgressHUD alloc] initWithView:self.view];
        [self.view addSubview:HUD];
        HUD.delegate = self;
        HUD.labelText = @"Loading";
        [HUD show:YES];

    The result (shown in a screenshot in the original post) is that the table view's section headings are drawn on top of the HUD. Is there any way to resolve this issue, or should I just abandon MBProgressHUD? Thanks!

    Read the article

  • Get the URL(s) from Firefox using C# .NET 3.5

    - by ghawkes
    I am writing a .NET 3.5 WPF application in C#. This application needs to be able to get the URL(s) out of the browser window when it is in the foreground. I already have code working that handles a global Windows hotkey and then checks whether the foreground window handle (IntPtr) belongs to a browser. If so, I am able to obtain the System.Diagnostics.Process object that maps to the browser. At this point, I would like to obtain the URL(s) from the browser. Thank you, G

    Read the article

  • Several modules in a package importing one common module

    - by morpheous
    I am writing a Python package. I am using the concept of plugins, where each plugin is a specialization of a Worker class. Each plugin is written as a module (script?) and spawned in a separate process. Because of the base commonality between the plugins (e.g. all extend a base class 'Worker'), the plugin module generally looks like this:

        import commonfuncs

        def do_work(data):
            # do customised work for the plugin
            print 'child1 does work with %s' % data

    In C/C++, we have include guards, which prevent a header from being included more than once. Do I need something like that in Python, and if yes, how may I make sure that commonfuncs is not 'included' more than once?
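
    For what it's worth, Python's import system already behaves like an include guard: a module body is executed only once per process, and every later import simply returns the cached entry in sys.modules. A small sketch illustrating this (the shared_helper function and the two-file layout are hypothetical; only the commonfuncs name is taken from the question):

        # commonfuncs.py -- hypothetical shared module
        print('commonfuncs module body executed')   # runs only on the first import

        def shared_helper(data):
            return 'processed %s' % data

        # a plugin module: importing commonfuncs repeatedly does not re-run it
        import sys
        import commonfuncs        # first import in this process: executes the body above
        import commonfuncs        # already in sys.modules: nothing is re-executed

        def do_work(data):
            print(commonfuncs.shared_helper(data))

        assert 'commonfuncs' in sys.modules   # the cache that makes repeated imports cheap

    Since each plugin is spawned in a separate process, every process still performs its own single import of commonfuncs; the caching applies per process.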

    Read the article

  • Android - Opening phone deletes app state

    - by Tom G
    Hey everyone, I'm writing an Android application that maintains a lot of "state" data... some of it I can save in the form of onSaveInstanceState, but some of it is just too complex to save in memory. My problem is that sliding the phone open destroys and recreates the app, and I lose all my application state in the process. The same thing happens with the "back" button, but I was able to override that function. Is there any way to override the phone-opening behaviour to prevent this from happening? Thanks in advance.

    Read the article

  • Datagridview retains waitcursor when updated from thread

    - by Apeksha
    I have a DataGridView control in my Windows Forms Application. I am adding rows to the grid using a background thread. I change the form's cursor to Waitcursor when the process starts and back to Default when it ends. This works well for the form, but not for the grid. When the form's cursor is changed back to default, the grid's cursor does not change, although the cursor over the rest of the form does. Does this have anything to do with the fact that I am updating the grid from a background thread? (The cursor is being changed from the UI thread).

    Read the article

  • Modal forms get in the way of processing

    - by Botax
    I’m working on an interface in VB6 to interact with a sound editor to automate certain tasks mainly using the editor’s object handles and activating them through SendMessage/PostMessage. In general it works OK, except that the editor has some dialog boxes that open in modal mode and freeze everything on the interface, including the timers. Is there a practical way to get these dialog boxes to open modeless or to interact with them from the interface after they pop up? I tried an MDI form, but it also freezes along with everything else. The only way to override the modal mode of these boxes is to launch an independent applet beforehand to address the dialog boxes with a timer, but the process is somewhat cumbersome. All I need to do with the dialog boxes is click the OK button or hit the return key.

    Read the article

  • How to generalize view?

    - by MexicanHacker
    Hey guys, I'm in the process of designing a set of views that use the same model in my app; the difference is that the views differ in which fields are read-only and which are modifiable. So, for example, for view A I want to be able to modify A.One but not A.Two, and for view B I want to have B.One and B.Two as modifiable fields. I was thinking of having a map that holds this information and iterating both the modifiable and non-modifiable lists in a generic view, but I'd like to get feedback from you guys. What do you think?
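
    A minimal, framework-agnostic sketch in Python of the map-driven idea described above (the view names, field names, and HTML snippets are made up for illustration):

        # Map each view to the fields a user may edit; everything else is read-only.
        EDITABLE_FIELDS = {
            'view_a': {'one'},           # view A: only "One" is modifiable
            'view_b': {'one', 'two'},    # view B: both fields are modifiable
        }

        def render_field(view_name, field_name, value):
            """Render one field, choosing the widget from the per-view map."""
            if field_name in EDITABLE_FIELDS.get(view_name, set()):
                return '<input name="%s" value="%s">' % (field_name, value)
            return '<span>%s</span>' % value

        # A single generic view just iterates the model's fields:
        for field, value in [('one', 1), ('two', 2)]:
            print(render_field('view_a', field, value))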

    Read the article

  • Ajax response seems to be getting lost

    - by Ringo Blancke
    I'm using the ddslick jquery dropdown plugin in conjunction with my Rails app. In view1, I have:

        $('#challenges_dropdown').ddslick({
            <snipped>
            onSelected: function (data) {
                $.ajax({
                    url: "/load_data",
                    type: "GET",
                    data: {"id": data.selectedData.value}
                });
            }
        });

    I.e., I make a call to my controller to load_data. The controller receives this correctly and then, at the end, makes a call to render a separate view:

        render "data"

    This view contains a script snippet that needs to run in order to update some elements of my original view. For some reason, this script snippet is just not running. I'm very confused. When I use a regular link with data-remote="true", then the whole process works perfectly. However, when I make an AJAX call, it fails. What's going on?? Thanks! Ringo

    Read the article

  • SharePoint 2010 - Managed Keywords

    - by Audioillity
    Hi, is it possible to import managed keywords into SharePoint 2010? Where are the keywords stored, and within which database? Background: I'm currently working on a migration from a legacy system into SharePoint 2010. So far everything is going well, and I can even bring the managed metadata across along with most other data. The process I use was built for SharePoint 2007 to update lists over SOAP. With a few manual tweaks I've managed to get the metadata to come across. To bring across either managed metadata or managed keywords I need to know the ID for the existing label/keyword. I have this for the managed metadata, but not for the managed keywords. Currently I create a CSV file to be imported for managed metadata, then work out the relevant GUID for the source label. Many thanks, Luke

    Read the article

  • Clear the form once form submitted

    - by zurna
    Once the form is submitted, the response from another page is printed to #GameStorySys, but the values entered into the form are still there. Is it possible for the form values to be cleared (while the form itself stays) once the form is submitted?

        $("[name='GameStoryForm']").click(function() {
            $.ajax({
                type: "POST",
                data: $("#GameStoryForm").serialize(),
                url: "content/commentary/index.cs.asp?Process=EditLiveCommentaryStory&CommentaryID=<%=Request.QueryString("CommentaryID")%>",
                success: function(output) {
                    $('#GameStorySys').html(output);
                },
                error: function(output) {
                    $('#GameStorySys').html(output);
                }
            });
        });

    Read the article

  • Question creating PDF document in Zend Framework

    - by deaddancer
    I need to take a ZF-rendered view, create a PDF that should look pretty much exactly the same, and email it. The major issue I have right now is getting the HTML created by the view into a string that I can then process with the Zend_Pdf::parse method. The view I need to turn into a PDF is the result of a posted form. I've tried grabbing the contents of ob_get_contents into a string after a successful post, but for some reason it's not in there. Should I press on with this angle? Any help would be greatly appreciated!

    Read the article

  • Emacs and Long Shell Commands

    - by darrint
    Is there a way to run a shell command, have the output show up in a new buffer, and have that output show up incrementally? Eshell and other Emacs terminal emulators do a fine job of this, but I see no way to script them. What I'd like to do is write little elisp functions to do stuff like run unit tests, etc., and watch the output trickle into a buffer. The elisp function shell-command is close to what I want, but it shows all the output at once when the process finishes.

    Read the article

  • Performance of Serialized Objects in C++

    - by jm1234567890
    Hi everyone, I'm wondering if there is a fast way to dump an STL set to disk and then read it back later. The internal structure of a set is a binary tree, so if I serialize it naively, when I read it back the program will have to go through the process of inserting each element again. I think this is slow even if it is read back in the correct order; correct me if I am wrong. Is there a way to "dump" the memory containing the set to disk and then read it back later? That is, keep everything in binary format, thus avoiding the re-insertion. Do the boost serialization tools do this? Thanks! EDIT: oh, I should probably read http://www.parashift.com/c++-faq-lite/serialization.html ... I will read it now... no, it doesn't really help.

    Read the article

  • Passing a multi-line string as an argument to a script in Windows

    - by Zack Mulgrew
    I have a simple python script like so:

        import sys

        lines = sys.argv[1]
        for line in lines.splitlines():
            print line

    I want to call it from the command line (or a .bat file), but the first argument may (and probably will) be a string with multiple lines in it. How does one do this? Of course, this works:

        import sys

        lines = """This is a string
        It has multiple lines
        there are three total"""

        for line in lines.splitlines():
            print line

    But I need to be able to process an argument line-by-line. EDIT: This is probably more of a Windows command-line problem than a Python problem. EDIT 2: Thanks for all of the good suggestions. It doesn't look like it's possible. I can't use another shell because I'm actually trying to invoke the script from another program, which seems to use the Windows command-line behind the scenes.
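
    One possible workaround, sketched here as an assumption rather than the asker's actual setup: have the calling program write the text to the script's standard input instead of passing it as an argument, so the Windows command line never has to carry embedded newlines. The script name and file name below are hypothetical:

        # read_lines.py -- stdin-based variant of the script above
        import sys

        for line in sys.stdin.read().splitlines():
            print(line)

    The caller can then pipe or redirect the multi-line text, for example "type message.txt | python read_lines.py" in cmd.exe.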

    Read the article

  • How do I execute an action in drupal after each time a node is saved?

    - by ford
    I'm developing an action in Drupal which is supposed to run after a node is saved, exporting content to XML (which includes data from the node that was just saved), using the "Trigger: After saving an updated post" trigger. Unfortunately this action actually fires right before the information from the recently saved post is written to the database, i.e. when looking at the XML later, I find that the most recent change I made was not included. Saving after editing a different node will restore the previously missing data. How can I get my action to fire after the saving process is complete?

    Read the article

  • Real time App with Facebook

    - by Casebash
    Does Facebook provide access to any real time APIs so that you can respond to events as soon as they happen? If not, what alternatives are there and what are their limitations? For example, if I use polling instead, will they limit my api calls? And if I try using RSS feeds, about how much delay can I expect? Or maybe it would be possible to receive and process email notifications (if I could convince a user to forward mail to another email address), as they seem to be dispatched pretty promptly?

    Read the article

  • MPI_Bsend and MPI_Isend. How do they work?

    - by GBBL
    Hi, I was wondering whether buffered sends and non-blocking sends introduce a new level of parallelism into my application, and if so how, for example by generating a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin to compute the next result. Only when I have to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism in my application, between computation and communication. Does anybody know how this is done in MPI? Does MPI generate a new thread to handle the Bsend or Isend? Thanks.
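
    For illustration only, here is the overlap pattern described above, sketched with the mpi4py Python bindings (the process layout, tag scheme, and compute step are assumptions, and this says nothing about how an MPI implementation handles Bsend/Isend internally):

        # Worker posts a non-blocking send, computes the next chunk while the previous
        # send is in flight, and waits on the request before posting the next one.
        # Note: mpi4py's lowercase isend() pickles the object, so the C-level question
        # of when a send buffer may be reused does not map onto it one-to-one.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        STEPS = 10

        def compute_chunk(step):
            return [step] * 1000                  # stand-in for the expensive computation

        if comm.Get_rank() != 0:                  # slave: produce and send data
            request = None
            for step in range(STEPS):
                data = compute_chunk(step)        # overlaps with the send still in flight
                if request is not None:
                    request.wait()                # make sure the previous send finished
                request = comm.isend(data, dest=0, tag=step)
            if request is not None:
                request.wait()
        else:                                     # master: collect the chunks
            for step in range(STEPS):
                for source in range(1, comm.Get_size()):
                    chunk = comm.recv(source=source, tag=step)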

    Read the article

  • What's the most efficient way to manage large datasets with Javascript/jQuery in IE?

    - by RenderIn
    I have a search that returns JSON, which I then transform into an HTML table in Javascript. It repeatedly calls the jQuery.append() method, once for each row. I have a modern machine, and the Firefox response time is acceptable, but in IE 8 it is unbearably slow. I decided to move the transformation from data to HTML into the server-side PHP, changing the return type from JSON to HTML. Now, rather than calling the jQuery.append() method repeatedly, I call the jQuery.html() method once with the entire table. I noticed Firefox got faster, but IE got slower. These results are anecdotal and I have not done any benchmarking, but the IE performance is very disappointing. Is there something I can do to speed up the manipulation of large amounts of data in IE, or is it simply a bad idea to process very much data at once with AJAX/Javascript?

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores I was thinking of doing so to get some improvement in performance (although I guess the main problem is the I/O and not only the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do it). What's your advice for getting the best CPU utilization? Thanks
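
    Not the asker's C#/SqlCommand code, but as a language-neutral illustration of capping the number of simultaneous workers, here is a sketch in Python with a fixed-size pool; the file list and the body of import_file are placeholders:

        # Process files with at most MAX_WORKERS concurrent workers.
        from concurrent.futures import ThreadPoolExecutor

        MAX_WORKERS = 4                  # e.g. one per core; tune against the real I/O limits

        def import_file(path):
            # placeholder for: parse the XML file and insert its records into the DB
            return path

        paths = ['file%05d.xml' % i for i in range(100)]    # hypothetical input list

        with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
            for result in pool.map(import_file, paths):
                pass                     # never more than MAX_WORKERS imports run at once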

    Read the article

  • fancybox - passing 'this' to onClosed function

    - by Phil Jackson
    Hi, this is probably really simple but I just can't seem to figure it out!

        if( is_logged_out( html ) ) {
            var throughClick = $(this);
            $.fancybox( html, {
                'autoDimensions'     : false,
                'width'              : 'auto',
                'height'             : 'auto',
                'transitionIn'       : 'none',
                'transitionOut'      : 'none',
                'hideOnOverlayClick' : false,
                'showCloseButton'    : false,
                'onClosed'           : function( throughClick ) {
                    alert(throughClick.attr('name'));
                    throughClick.trigger('click');
                }
            });
        }else{

    All I want to do is pass the object of whichever button or link was clicked, so once the user has logged back in it will process again. Any help is much appreciated.

    Read the article

  • parallel.foreach with custom collection

    - by SchwartzE
    I am extending the System.Net.Mail.MailAddress class to include an ID field, so I created a new custom MailAddress class that inherited from the existing class and a new custom MailAddressCollection class. I then overrode the existing System.Net.Mail.MailMessage.To to use my new collection. I would like to process the recipients in parallel, but I can't get the syntax right. This is the syntax I am using:

        Parallel.ForEach(EmailMessage.To, (MailAddress address) =>
        {
            emailService.InsertRecipient(emailId, address.DisplayName, address.Address, " ");
        });

    I get the following errors:

        The best overloaded method match for 'System.Threading.Tasks.Parallel.ForEach(System.Collections.Generic.IEnumerable, System.Action)' has some invalid arguments
        Argument 1: cannot convert from 'EmailService.MailAddressCollection' to 'System.Collections.Generic.IEnumerable'

    What syntax do I need to use custom collections?

    Read the article

  • Mercurial adding new file gives no match found error

    - by Ivo
    I have a strange problem with updating Mercurial. Every time I add a file to my repository and then update another location of the repository (for example within a CI process), the error "no match found" occurs. Then when I remove the whole folder and clone it again there are no problems and the newly added file(s) are there. Updating and removing doesn't give problems. When I do a verify, the following is shown:

        data/test.txt.i@54: missing revlog!
        54: empty or missing test.txt
        test.txt@54: b80de5d13875 in manifests not found
        3 integrity errors encountered!
        (first damaged changeset appears to be 54)

    Any idea what could be causing this?

    Read the article

  • How to handle added input data columns without having to maintain multiple versions of SSIS packages?

    - by GLFelger
    I’m writing to solicit ideas for a solution to an upcoming problem. The product that provides data to our ETL process currently has multiple versions. Our clients are all using some version of the product, but not all use the same version and they will not all be upgraded at the same time. As new versions of the product are rolled out, the most common change is to add new data columns. Columns being dropped or renamed may happen occasionally, but our main focus right now is how to handle new columns being added. The problem that we want to address is how to handle the data for clients who use an older version of the product. If we don’t account for the new columns in our SSIS packages, then the data in those columns for clients using an older product version will not be processed. What we want to avoid is having to maintain a separate version of the SSIS packages for each version of the product. Has anyone successfully implemented a solution for this situation?

    Read the article
