Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side javascript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this: Ping an external API to get back a JSON object Work client-side with that object, and eventually have a "download me" link This link sends the data to my server, which processes it and sends it back with a mime type application/json, which (should) prompt the user to download the file locally. Right now here are my pieces: Server Side Code <?php $data = array('zero', 'one', 'two', 'testing the encoding'); $json = json_encode($data); //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing header("Content-type: application/json"); header('Content-Disposition: attachment; filename="backup.json"'); echo $_GET['callback'] . ' (' . $json . ');'; ?> Relevant Client Side Code $("#download").click(function(){ var json = JSON.stringify(collection); //serializes their object $.ajax({ type: "GET", url: "http://www.myURL.com/api.php?callback=?", //this is the above script dataType: "jsonp", contentType: 'jsonp', data: json, success: function(data){ console.log( "Data Received: " + data[3] ); } }); return false; }); Right now when I visit the api.php site with Firefox, it prompts a download of download.json and that results in this text file, as expected: (["zero","one","two","testing the encoding"]); And when I click #download to run the AJAX call, it logs in Firebug Data Received: testing the encoding which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things: The Actual Questions What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler) How do I access, server-side, the json object being sent to the server to serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything)
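
    A sketch of the usual workaround (untested; the field name "data" is an assumption, not part of the original code): an AJAX/JSONP response can never raise the browser's save-file prompt, so the click handler instead posts the JSON through a real form submission, which lets the Content-Disposition header trigger the download. On the PHP side the payload is then available under the chosen key, e.g. $_POST['data'], and no JSONP callback wrapper is needed:

        // Hypothetical sketch: submit the JSON as a normal form POST so the
        // browser itself requests api.php and honors Content-Disposition.
        $("#download").click(function () {
            var json = JSON.stringify(collection);
            $('<form method="post" action="http://www.myURL.com/api.php"></form>')
                .append($('<input type="hidden" name="data" />').val(json))
                .appendTo("body")
                .submit();  // page stays put; the response arrives as a download
            return false;
        });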

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block: void processFile(File f) { try { // construction of FileReader can throw FileNotFoundException BufferedReader in = new BufferedReader(new FileReader(f)); // call of readLine can throw IOException String line; while ((line = in.readLine()) != null) { process(line); } } catch (FileNotFoundException ex) { handle(ex); } catch (IOException ex) { handle(ex); } } This one contains only the methods that throw exceptions within try blocks: void processFile(File f) { FileReader reader; try { reader = new FileReader(f); } catch (FileNotFoundException ex) { handle(ex); return; } BufferedReader in = new BufferedReader(reader); String line; while (true) { try { line = in.readLine(); } catch (IOException ex) { handle(ex); break; } if (line == null) { break; } process(line); } }
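
    For what it's worth, there is a middle ground here because FileNotFoundException is a subclass of IOException, so one catch clause covers both; a sketch of that version (assuming handle() accepts an IOException), keeping the readable loop of the first example:

        void processFile(File f) {
            try {
                BufferedReader in = new BufferedReader(new FileReader(f));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        process(line);
                    }
                } finally {
                    in.close(); // release the file handle even when a read fails
                }
            } catch (IOException ex) { // also catches FileNotFoundException
                handle(ex);
            }
        }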

    Read the article

  • Using SQL dB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure columns are approximated to [UserId, col1, col2, ... , col N, beingProcessed, lastTimeProcessed ] beingProcessed is boolean and lastTimeProcessed is DateTime. The logic in every worker role is: public void WorkerRoleMain() { while(true) { try { dbContext db = new dbContext(); // Read foreach (UserProfile user in db.UserProfile .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24) & u.beingProcessed == false)) { user.beingProcessed = true; // Modify db.SaveChanges(); // Write // Do some long drawn processing here ... ... ... user.lastTimeProcessed = DateTime.UtcNow; user.beingProcessed = false; db.SaveChanges(); } } catch(Exception ex) { LogException(ex); Sleep(TimeSpan.FromMinutes(5)); } } // while () } With multiple workers processing as above (each with their own Entity Framework layer), in essence beingProcessed is being used as a lock for MutEx purposes Question: How can I deal with concurrency issues on the beingProcessed "lock" itself based on the above load? I think the read-modify-write operation on beingProcessed needs to be atomic but I'm open to other strategies. Open to other code refinements too.
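
    One common pattern (a sketch only, assuming EF 4.1+ against SQL Server/SQL Azure and the table/column names above): make the claim itself a single atomic UPDATE, so that flipping beingProcessed and testing it cannot be interleaved between workers:

        // Claim one eligible row atomically; OUTPUT tells us which row we won.
        // UPDLOCK/READPAST are SQL Server hints that keep workers off each
        // other's rows. An empty result means another worker got there first.
        var claimedIds = db.Database.SqlQuery<int>(
            @"UPDATE TOP (1) UserProfile WITH (UPDLOCK, READPAST)
              SET beingProcessed = 1
              OUTPUT inserted.UserId
              WHERE beingProcessed = 0
                AND lastTimeProcessed < DATEADD(HOUR, -24, GETUTCDATE())").ToList();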

    Read the article

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where processes communicate using MPI. The basic design I'm trying to implement is the following: The first process (the "server") launches a GUI (derived from QMainWindow), which listens for messages from the clients (using repeat fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread, and calls its start() method. The run() method of these classes all look like this: from foo.cpp: void Foo::run() { while (true) { // Send message to the first process // Wait for a reply // Do uninteresting stuff with the reply sleep(3); // also tried QThread::sleep(3) } } In the client's code, there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping (if I surround the sleep() call with two writes to a log file, only the first one is executed, control never reaches the second). Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?

    Read the article

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have a component-based architecture framework designed, and I use NUnit for isolated testing - okay so far. Now I want to enable integration tests, so the tests use real implementations of the existing components. Each element of a component has a life cycle (init, start and stop), and I created an NUnit component. In the start section, the NUnit console runner is executed. Okay - now if I have a test fixture class in my DLLs in the execution path, the runner executes them - fine! But, and this is crucial: each implementation under test already exists in the process, and I want to use these instances for testing. If I use the NUnit runner in the current way, each instance will be created twice - and above all: I have a Spring container and an implementation registry. Via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Granted, I can start the component architecture framework in the startup of the NUnit runner - but this is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then while I'm at it making it easier for the new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed I'll go ahead and list compiler/IDE as one of the items: Here are some ideas for part 1: [edit]: I'm keeping this updated based on the better suggestions. Source Control access Issue/Bug/Project tracker System Documentation, or references to find the system documentation in source control or in a wiki, including: build document/environment covered by this question design documents / technical notes Coding Style guidelines Deploy for review/testing/QA/staging/production procedures Licensing details for your tools and your product Team Calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required) compiler/IDE compiler/IDE extensions (things like source control plugins or visual studio add-ins) 3rd party SDKs/toolkits Database connection and tools Testing Frameworks Internal libraries communication tools (chat, wiki, etc) Static analysis tools (FxCop, FlawFinder, etc) Virtual machines (holding dev environment or for testing) Specialized editors (modeling, xml, etc) Other tools What else goes in this list, and how do you document it and vet changes?

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one, any insights will be much appreciated: I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a c++ dll to supply some data processing functions which are called from thread 2. I have a requirement to make another set of data processing functions in VB6. I have thus created a VB6 dll, using the add-in vbAdvance to create a standard dll. When I call functions from within this VB6 dll from the main thread, everything works exactly as expected. When I call functions from this VB6 dll in thread 2, I get an access violation. I've traced the error to the CopyMemory command, it would seem that if this is used within the call from the main thread, it's fine but in a call from the process thread, it causes an exception. Why should this be so? As far as I understand, threads share the same address space. Here is the code from my VB dll Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long) Dim myarray1() As Single Dim myarray2() As Single Dim i As Integer ReDim myarray1(0 To nsamples - 1) As Single ReDim myarray2(0 To nsamples - 1) As Single With tsa1din(0) ' defined as safearray1d in a global definitions module .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = in1ptr End With With tsa1dout .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = out1ptr End With CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4 CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4 For i = 0 To nsamples - 1 myarray2(i) = myarray1(i) * 2 Next i ZeroMemory ByVal VarPtrArray(myarray1), 4 ZeroMemory ByVal VarPtrArray(myarray2), 4 End Sub

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode): for each message in message_sequence <- SEQUENTIAL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently. Pros: predictable ordering of messages (i.e., we are guaranteed a FIFO processing order) (potentially) lower latency of processing each message Cons: more processing resources available than handlers for a single message type (bad parallelization) bad use of processor cache since message objects need to be copied for each handler to use large overhead for small handlers The pseudocode of this approach would be as follows: for each message in message_sequence <- SEQUENTIAL parallel_for each handler in (handler_table for message.type) apply handler to message <- PARALLEL The second approach is to process the messages in parallel and apply the handlers to each message sequentially. Pros: better use of processor cache (keeps the message object local to all handlers which will use it) small handlers don't impose as much overhead (as long as there are other handlers also to be run) more messages are expected than there are handlers, so the potential for parallelism is greater Cons: Unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic) The pseudocode is as follows: parallel_for each message in message_sequence <- PARALLEL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
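
    Since the post already targets Intel TBB, the second approach maps almost directly onto tbb::parallel_for_each; a compact sketch with placeholder Message/Handler types standing in for the real ones:

        #include <tbb/parallel_for_each.h>
        #include <cstddef>
        #include <vector>

        struct Message { int type; };                     // placeholder types
        typedef void (*Handler)(Message&);
        std::vector<std::vector<Handler> > handler_table; // indexed by message type

        struct ApplyHandlers {
            void operator()(Message& m) const {
                const std::vector<Handler>& hs = handler_table[m.type];
                for (std::size_t i = 0; i < hs.size(); ++i)
                    hs[i](m);            // handlers stay sequential per message
            }
        };

        void process_all(std::vector<Message>& message_sequence) {
            // messages in parallel, handlers in order: approach 2
            tbb::parallel_for_each(message_sequence.begin(),
                                   message_sequence.end(), ApplyHandlers());
        }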

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from on-premise SQL Server 2012 to SQL Azure. Easy, right? Methods After a fair bit of research I've come across the following methods: Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate and if I hit any bumps in the road I can just make changes to the source. Processes Using whichever method, the process could be either: Export from on-premise SQL Server 2012 to local BACPAC Upload BACPAC to blob storage Import BACPAC to SQL Azure via Hosted DAC Or: Export from on-premise SQL Server 2012 to local BACPAC Import BACPAC to SQL Azure via Client DAC Question All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right click on local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere?
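
    For reference, a hedged sketch of the Client DAC route driven straight from MSBuild: SqlPackage.exe ships with the SQL Server 2012 Data-Tier Application Framework and handles both legs from the command line (paths, server names and credentials below are placeholders):

        <Target Name="DeployToAzure">
          <!-- Export the on-premise database to a local .bacpac -->
          <Exec Command='"$(SqlPackageDir)SqlPackage.exe" /Action:Export /SourceServerName:localhost /SourceDatabaseName:MyDb /TargetFile:"$(OutDir)MyDb.bacpac"' />
          <!-- Import the .bacpac into SQL Azure -->
          <Exec Command='"$(SqlPackageDir)SqlPackage.exe" /Action:Import /SourceFile:"$(OutDir)MyDb.bacpac" /TargetServerName:myserver.database.windows.net /TargetDatabaseName:MyDb /TargetUser:user /TargetPassword:pass' />
        </Target>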

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and here at StackOverflow, looked through the related questions and searched, but I'm not finding what I'm looking for. I've also searched documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository) etc from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites, in Visual Studio, we do file based web sites, rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN, it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives, and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision). So that means that we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious? Edit - added I'm willing to accept answers like "We tried it and never got it to work", and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • C# - retrieve file path from config file - @ doesn't do its magic

    - by Bart
    Hi guys, I'm currently working on a web service that retrieves an XML message, archives it and then processes it further. The archive folder is read from the Web.config. This is what the archive method looks like private void Archive(System.Xml.XmlDocument xmlDocument) { try { string directory = System.Configuration.ConfigurationManager.AppSettings.Get("ArchivePath"); ParseMessage(xmlDocument); directory = string.Format(@"{0}\{1}\{2}", @directory, _senderService, DateTime.Now.ToString("MMMyyyy")); System.IO.Directory.CreateDirectory(directory); string Id = _messageID; string senderService = _senderService; xmlDocument.Save(directory + @"\" + DateTime.Now.ToString("yyyyMMdd_") + Id + "_" + System.Guid.NewGuid().ToString().Substring(0, 13) + ".xml"); } The path structure I retrieve is C:\Program Files\Subfolder\Subfolder. In the development, QA, UAT and PRD environments everything works fine. But on another machine I now need to install the web service on (which I cannot debug, unfortunately), the directory string is 'C:Files'. Just to be sure I double checked the .NET version on the different machines (I thought perhaps the usage of @ before a string was version-dependent); all machines use 2.0.50727. Does anyone recognize this problem? Thanks in advance!
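
    An aside on the snippet itself: @ is purely a compile-time marker for verbatim string literals, so @directory inside the string.Format call has no effect at runtime and cannot explain (or fix) the mangled path. A sketch of the same path-building with Path.Combine, which sidesteps separator handling entirely:

        // Sketch only: Path.Combine supplies the separators, so no verbatim
        // literals or hand-written backslashes are needed.
        string directory = System.Configuration.ConfigurationManager
                                 .AppSettings.Get("ArchivePath");
        directory = System.IO.Path.Combine(directory, _senderService);
        directory = System.IO.Path.Combine(directory, DateTime.Now.ToString("MMMyyyy"));
        System.IO.Directory.CreateDirectory(directory);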

    Read the article

  • Make JFace Window blink in taskbar or get user's attention?

    - by Sophomore
    Hi folks, I wonder if someone has any idea how to solve this: In my Java Eclipse plugin there are some processes which take some time. Therefore the user might minimize the window and let the process run in the background. Now, when the process is finished, I can force the window to come to the top again, but that is a no-no in usability. I'd rather want the process to blink in the taskbar instead. Is there any way to achieve this? I had a look at the org.eclipse.jface.window but couldn't find anything like that, same goes for the SWT documentation... Another thing which came to my mind - as people are using this app on Mac OS X and Linux as well, is there a platform-independent solution, which will inform the user that the process has finished but without bringing the window to the top? Any ideas are highly welcome! Edit: I found out that on Windows the user can adjust whether he would like to enable a force to foreground or not. If that option is disabled, the task will just blink in the taskbar... Here's a good read on that... If anyone maybe knows about some platform-independent way of achieving this kind of behaviour, please share your knowledge with me!

    Read the article

  • No Buffer Space available (maximum connections reached?) From Postgres EDB Driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows com.edb.util.PSQLException: The connection attempt failed. at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189) at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64) at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161) at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30) at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24) at com.edb.Driver.makeConnection(Driver.java:391) at com.edb.Driver.connect(Driver.java:266) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) ... 12 more Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.<init>(Unknown Source) at java.net.Socket.<init>(Unknown Source) at com.edb.core.PGStream.<init>(PGStream.java:70) at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115) ... 20 more When the error occurred we were not able to connect to the internet and DB and had to reboot the system. But the error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. it has not reached the max limit. Our application has multiple long running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get related data. We are using the following system and applications: Windows 2003 server SP2 Java 1.6 Postgres Plus Advanced server 8.4 Database edb-jdbc14.jar driver for connecting to the DB from Java We have used the default configuration of the Postgres DB except increasing the connections to 120 from 100. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

    Read the article

  • ListBoxFor not populating with selected items

    - by user576838
    I've seen this question asked a couple of other times, and I followed this after I tried things on my own with the MS music store demo to no avail, but I still can't get this to work. I've also noticed when I look at my MultiSelectList object in the viewmodel, it has the correct items in the selected items property, but if I expand the results view, it doesn't have any listboxitem with the selected value. What am I missing here? I feel like I'm taking crazy pills! Thanks in advance. model: public class Article { public int ArticleID { get; set; } public DateTime? DatePurchased { get; set; } public DateTime? LastWorn { get; set; } public string ThumbnailImg { get; set; } public string LargeImg { get; set; } public virtual List<Outfit> Outfits { get; set; } public virtual List<Tag> Tags { get; set; } } viewmodel: public class ArticleViewModel { public int ArticleID { get; set; } public List<Tag> Tags { get; set; } public MultiSelectList mslTags { get; set; } public virtual Article Article { get; set; } public ArticleViewModel(int ArticleID) { using (ctContext db = new ctContext()) { this.Article = db.Articles.Find(ArticleID); this.Tags = db.Tags.ToList(); this.mslTags = new MultiSelectList(this.Tags, "TagID", "Name", this.Article.Tags); } } } controller: public ActionResult Index() { ArticleIndexViewModel vm = new ArticleIndexViewModel(db); return View(vm); } view: @model ClosetTracker.ArticleViewModel @using (Html.BeginForm()) { <img id="bigImg" src="@Model.Article.ThumbnailImg" alt="img" /> @Html.HiddenFor(m => m.ArticleID); @Html.LabelFor(m => m.Article.Tags) @* @Html.ListBoxFor(m => m.Article.Tags, Model.Tags.Select(t => new SelectListItem { Text = t.Name, Value = t.TagID.ToString() }), new { Multiple = "multiple" }) *@ @Html.ListBoxFor(m => m.Article.Tags, Model.mslTags); @Html.LabelFor(m => m.Article.LastWorn) @Html.TextBoxFor(m => m.Article.LastWorn, new { @class = "datepicker" }) @Html.LabelFor(m => m.Article.DatePurchased) @Html.TextBoxFor(m => m.Article.DatePurchased, new { @class = "datepicker" }) <p> <input type="submit" value="Save" /> </p> } EDITED Ok, I changed around the constructor of the MultiSelectList to have a list of TagID in the selected value arg instead of a list of Tag objects. This shows the correct tags as selected in the results view when I watch the mslTags object in debug mode. However, it still isn't rendering correctly to the page. public class ArticleViewModel { public int ArticleID { get; set; } public List<Tag> Tags { get; set; } public MultiSelectList mslTags { get; set; } public virtual Article Article { get; set; } public ArticleViewModel(int ArticleID) { using (ctContext db = new ctContext()) { this.Article = db.Articles.Find(ArticleID); this.Tags = db.Tags.ToList(); this.mslTags = new MultiSelectList(this.Tags, "TagID", "Name", this.Article.Tags.Select(t => t.TagID).ToList()); } } }
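
    One detail worth checking (a sketch, with an invented SelectedTagIds property): Html.ListBoxFor takes its selections from the values of the bound property, not from the MultiSelectList's selected-items argument, so binding the list box to the Tag entity collection can never produce a match - binding it to a collection of the selected keys usually does:

        // Hypothetical view-model change: expose just the selected keys.
        public List<int> SelectedTagIds { get; set; }

        // In the constructor:
        // this.SelectedTagIds = this.Article.Tags.Select(t => t.TagID).ToList();

        // In the view, bind the keys and let the MultiSelectList supply options:
        // @Html.ListBoxFor(m => m.SelectedTagIds,
        //                  new MultiSelectList(Model.Tags, "TagID", "Name"))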

    Read the article

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer, and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server that the IIS is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .Net 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump using adplus and windbg of the IIS processes and try to figure out what's wrong. Has anyone encountered something like this or has an idea which direction I should check? p.s. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved. It turns out I had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. Took me two days to find it, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

    Read the article

  • Python 2.7 list of lists manipulation functionality

    - by user3688163
    I am trying to perform several operations on the myList list of lists below and am having some trouble figuring it out. I am very new to Python. myList = [ ['Issue Id','1.Completeness for OTC','Break',3275,33,33725102303,296384802,20140107], ['Issue Id','2.Validity check1 for OTC','Break',3308,0,34021487105,0,20140107], ['Issue Id','3.Validity check2 for OTC','Break',3308,0,34021487105,0,20140107], ['Issue Id','4.Completeness for RST','Break',73376,1,8.24931E+11,44690130,20140107], ['Issue Id','5.Validity check1 for RST','Break',73377,0,8.24976E+11,0,20140107], ['Liquidity','1. OTC - Null','Break',7821,0,2.28291E+11,0,20140110], ['Liquidity','2. OTC - Unmapped','Break',7778,43,2.27712E+11,579021732.8,20140110], ['Liquidity','3. RST - Null','Break',335120,0,1.01425E+12,0,20140110], ['Liquidity','4. RST - Unmapped','Break',334608,512,1.01351E+12,735465433.1,20140110], ['Liquidity','5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110], ['Issue Id','1.Completeness for OTC','Break',3292,33,32397924450,306203929,20140110], ['Issue Id','2.Validity check1 for OTC','Break',3325,0,32704128379,0,20140110], ['Issue Id','3.Validity check2 for OTC','Break',3325,0,32704128379,0,20140110], ['Issue Id','4.Completeness for RST','Break',73594,3,8.5352E+11,69614602,20140110], ['Issue Id','5.Validity check1 for RST','Break',73597,0,8.5359E+11,0,20140110], ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101], ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101], ['Missing Book','DQ','Break',3192720,8774,3001000000,2740595.484,20140101], ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101], ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107], ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107] ] I am trying to get the result below but do not know how to do so: newList = [ ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3275,33,33725102303,296384802,20140107], ['Issue Id','4.Completeness for RST:5.Validity check1 for RST','Break',73376,1,8.24931E+11,44690130,20140107], ['Liquidity','1. OTC - Null:2. OTC - Unmapped','Break',7821,0,2.28291E+11,0,20140110], ['Liquidity','3. RST - Null:4. RST - Unmapped:5. RST - Valid','Break',335120,0,1.01425E+12,0,20140110], ['Issue Id','1.Completeness for OTC:2.Validity check1 for OTC:3.Validity check2 for OTC','Break',3292,33,32397924450,306203929,20140110], ['Issue Id','4.Completeness for RST:5.Validity check1 for RST','Break',73594,3,8.5352E+11,69614602,20140110], ['Unlinked Silver ID','DQ','Break',3201318,176,20000000,54974.33386,20140101], ['Missing GCI','DQ','Break',3201336,158,68000000,49351.9588,20140101], ['Missing Book','DQ','Break',3192720,8774,3001000000,2740595.484,20140101], ['Matured Trades','DQ','Break',3201006,488,1371000000,152428.8348,20140101], ['Illiquid Trades','1.Completeness Check for range','Break',43122,47,88597695671,54399061.43,20140107], ['Illiquid Trades','2.Completeness Check for non','Break',39033,0,79133622401,0,20140107] ] Rules to create newList: group the lists in myList that match on both myList[i][0] (the type) and myList[i][7] (the date). Within such a group, lists whose sums myList[i][3] + myList[i][4] and myList[i][5] + myList[i][6] differ from those of the other lists in the group are carried over to newList as-is. Lists whose two sums both match are merged into a single list: their myList[i][1] values are concatenated, separated by ':', and the remaining fields come from the first such list. The newList above illustrates the result I am trying to achieve. If anyone has any ideas how to do this they would be greatly appreciated. Thank you!
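
    A sketch of one reading of those rules - group rows by (type, date), then merge rows whose two sums agree by joining their second fields with ':' - in Python 2.7 (helper names invented; the output order may differ from the hand-written newList):

        from itertools import groupby

        def key(row):
            return (row[0], row[7])                    # type and date

        def sums(row):
            return (row[3] + row[4], row[5] + row[6])  # the two comparison sums

        newList = []
        for _, group in groupby(sorted(myList, key=key), key=key):
            merged = {}                                # sums -> merged row
            for row in group:
                s = sums(row)
                if s in merged:
                    merged[s][1] += ':' + row[1]       # concatenate the labels
                else:
                    merged[s] = list(row)              # keep the first row's numbers
            newList.extend(merged.values())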

    Read the article

  • Determine if the current thread has low I/O priority

    - by Magnus Hoff
    I have a background thread that does some I/O-intensive background type work. To please the other threads and processes running, I set the thread priority to "background mode" using SetThreadPriority, like this: SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN); However, THREAD_MODE_BACKGROUND_BEGIN is only available in Windows Server 2008 or newer, as well as Windows Vista and newer, but the program needs to work well on Windows Server 2003 and XP as well. So the real code is more like this: if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) { SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST); } The problem with this is that on Windows XP it will totally disrupt the system by using too much I/O. I have a plan for an ugly and shameful way of mitigating this problem, but that depends on me being able to determine if the current thread has low I/O priority or not. Now, I know I can store which thread priority I ended up setting, but the control flow in the program is not really well suited for this. I would rather like to be able to test later whether or not the current thread has low I/O priority -- if it is in "background mode". GetThreadPriority does not seem to give me this information. Is there any way to determine if the current thread has low I/O priority?
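
    Absent a documented query API on XP-era Windows, the "store which priority I ended up setting" bookkeeping can at least be kept out of the control flow with a thread-local flag; a sketch (all names invented; __declspec(thread) is MSVC-specific):

        #include <windows.h>

        // Hypothetical helper: each thread remembers whether background mode stuck.
        __declspec(thread) bool g_lowIoPriority = false;

        void EnterBackgroundMode() {
            if (SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) {
                g_lowIoPriority = true;    // Vista/2008+: real I/O throttling
            } else {
                SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
                g_lowIoPriority = false;   // XP/2003: CPU priority only
            }
        }

        bool HasLowIoPriority() { return g_lowIoPriority; }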

    Read the article

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow: New files are placed in an 'Incoming' directory Files are picked up using a file:inbound-channel-adapter The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation. This parsed line is routed to multiple 'Stage 2' channels. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4. *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file. How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
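
    Whatever the Spring Integration wiring ends up looking like, step 3 reduces to a reader that yields N lines per message without ever holding the whole file; a plain-Java sketch of that chunking (class name and size are illustrative):

        import java.io.*;
        import java.util.*;

        // Illustrative chunked reader: each call returns up to chunkSize lines,
        // or an empty list at end of file.
        public class LineChunkReader {
            private final BufferedReader in;
            private final int chunkSize;

            public LineChunkReader(File file, int chunkSize) throws IOException {
                this.in = new BufferedReader(new FileReader(file));
                this.chunkSize = chunkSize;
            }

            public List<String> nextChunk() throws IOException {
                List<String> lines = new ArrayList<String>(chunkSize);
                String line;
                while (lines.size() < chunkSize && (line = in.readLine()) != null) {
                    lines.add(line);
                }
                return lines; // empty list signals EOF; caller can then close()
            }

            public void close() throws IOException { in.close(); }
        }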

    Read the article

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split different phases into standalone modules which then would be used as necessary (since sometimes I will only be interested in the output of module A, sometimes I would need the output of five other modules, etc). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments, run required modules (and perhaps give some output, or should this be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly because of code readability, maintainability, and to be able to have more people working on different modules without the need to have some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation, the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).
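
    A common shape for this (a sketch; every name here is invented): each phase is an object behind a small interface, and a shared context memoizes outputs by module name, so module A runs once even when both B and C consume its result:

        #include <cstddef>
        #include <map>
        #include <string>
        #include <vector>

        // Shared blackboard: module outputs keyed by module name, computed once.
        struct Context {
            std::map<std::string, std::vector<char> > results;
        };

        class Module {
        public:
            virtual ~Module() {}
            virtual std::string name() const = 0;
            virtual std::vector<std::string> dependencies() const = 0;
            virtual void run(Context& ctx) = 0; // reads deps, writes results[name()]
        };

        // Run dependencies first; skip anything already in the context.
        void runModule(Module& m, std::map<std::string, Module*>& all, Context& ctx) {
            if (ctx.results.count(m.name())) return;          // memoized
            std::vector<std::string> deps = m.dependencies();
            for (std::size_t i = 0; i < deps.size(); ++i)
                runModule(*all[deps[i]], all, ctx);
            m.run(ctx);
        }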

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout: runner.py lib/ tests/ testsuite1/ testsuite1.py testsuite2/ testsuite2.py testsuite3/ testsuite3.py testsuite4/ testsuite4.py The format of testsuite*.py modules is as follows: import pytest class testsomething: def setup_class(self): ''' do some setup ''' # Do some setup stuff here def teardown_class(self): ''' do some teardown ''' # Do some teardown stuff here def test1(self): # Do some test1 related stuff def test2(self): # Do some test2 related stuff .... .... .... def test40(self): # Do some test40 related stuff if __name__ == '__main__': pytest.main(args=[os.path.abspath(__file__)]) The problem I have is that I would like to execute the 'testsuites' in parallel i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel but individual tests within the testsuites need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test is gathering all the tests and randomly load balancing the tests among 4 workers. This leads to the 'setup_class' method being executed for every test within a 'testsuitex.py' module (which defeats my purpose. I want setup_class to be executed only once per class and tests executed serially thereafter). Essentially what I want the execution to look like is: worker1: executes all tests in testsuite1.py serially worker2: executes all tests in testsuite2.py serially worker3: executes all tests in testsuite3.py serially worker4: executes all tests in testsuite4.py serially while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the 'pytest-xdist' framework? The only option that I can think of is to kick off different processes to execute each test suite individually within runner.py: def test_execute_func(testsuite_path): subprocess.call(['py.test', testsuite_path]) if __name__=='__main__': #Gather all the testsuite names for each testsuite: multiprocessing.Process(target=test_execute_func, args=(testsuite_path,)).start()
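
    A runnable version of that fallback (a sketch; it assumes py.test is on the PATH and simply fans out one process per suite file):

        import glob
        import multiprocessing
        import subprocess

        def run_suite(testsuite_path):
            # One worker per suite: tests inside it run serially, so
            # setup_class fires exactly once per class.
            subprocess.call(['py.test', testsuite_path])

        if __name__ == '__main__':
            workers = [multiprocessing.Process(target=run_suite, args=(path,))
                       for path in glob.glob('tests/testsuite*/testsuite*.py')]
            for w in workers:
                w.start()
            for w in workers:
                w.join()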

    Read the article

  • MUD (game) design concept question about timed events.

    - by mudder
    I'm trying my hand at building a MUD (multiplayer interactive-fiction game). I'm in the design/conceptualizing phase and I've run into a problem that I can't come up with a solution for. I'm hoping some more experienced programmers will have some advice. Here's the problem as best I can explain it. When the player decides to perform an action he sends a command to the server. The server then processes the command, determines whether or not the action can be performed, and either does it or responds with a reason as to why it could not be done. One reason that an action might fail is that the player is busy doing something else. For instance, if a player is mid-fight and has just swung a massive broadsword, it might take 3 seconds before he can repeat this action. If the player attempts to swing again too soon, the game will respond indicating that he must wait x seconds before doing that. Now, this I can probably design without much trouble. The problem I'm having is how I can replicate this behavior from AI creatures. All of the events that are being performed by the server ON ITS OWN, i.e. not as an immediate reaction to something a player has done, will have to be time sensitive. Some evil monster has cast a spell on you but must wait 30 seconds before doing it again... I think I'll probably be adding all these events to some kind of event queue, but how can I make that event queue time sensitive?
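
    The usual answer is a priority queue keyed on due time: the main server loop pops and runs everything whose timestamp has passed, which covers NPC cooldowns and player cooldowns alike. A minimal sketch (names invented):

        import heapq
        import time

        events = []  # heap of (due_time, action) pairs

        def schedule(delay_seconds, action):
            heapq.heappush(events, (time.time() + delay_seconds, action))

        def pump():
            """Called from the main server loop every tick."""
            now = time.time()
            while events and events[0][0] <= now:
                _, action = heapq.heappop(events)
                action()

        # usage: after the monster casts, re-arm its spell 30 seconds later
        # schedule(30, lambda: monster.set_spell_ready(True))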

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get to the object). After the second event the object is not needed for my purposes and should be discarded either by the storage or by my explicit action. So the ideal context where my object should be stored is per request (with guaranteed no sharing issues when different threads are serving requests... or processes/servers :) ) Take into account that the actual implementation is being done from an HttpModule and is supposed to be a pluggable solution for already written web apps (so the option to provide some state using static/instance variables in Global.asax doesn't look good - I would have to modify Global.asax in every web application). Cache seems to be too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but cannot tell whether it is exactly per HttpApplication instance or not (AFAIK you can have several instances of HttpApplication used on different threads and therefore serving several requests simultaneously - then using storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instances if I knew for sure that each is bound 1-to-1 with every HttpApplication instance running (but again I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance). Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well-suited for the per-request situation (because otherwise I may end up with something ugly... probably either some 'broader'-scoped storage and some hacks to have different keys in the storage for different requests, OR using a thread-local thing and in this way committing to the theory that IIS/ASP.NET will not ever serve the first event from one thread and the second event from another thread and so on)
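
    HttpContext.Items is exactly this per-request bag, and it is reachable from an IHttpModule without touching Global.asax; a minimal sketch (key and state type are invented):

        using System.Web;

        public class RequestScopedModule : IHttpModule
        {
            private const string Key = "MyModule.State"; // hypothetical key

            public void Init(HttpApplication app)
            {
                app.PreRequestHandlerExecute += (s, e) =>
                    ((HttpApplication)s).Context.Items[Key] = new MyState();

                app.PostRequestHandlerExecute += (s, e) =>
                {
                    HttpContext ctx = ((HttpApplication)s).Context;
                    MyState state = (MyState)ctx.Items[Key];
                    // ... use state, then drop it with the request ...
                    ctx.Items.Remove(Key);
                };
            }

            public void Dispose() { }
        }

        public class MyState { /* whatever must travel with one request */ }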

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), (let us call it the test record) I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is duplicate to one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance the test record will not be a dupe. The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field. You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching. My initial plan was to create a hash of hashes, top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record. My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x given the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors? Barring that, has anyone got a way to do this? I'm especially keen on other suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
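
    On the weighting sub-question: the scheme described is essentially inverse document frequency, where a token occurring in f(t) of n records gets weight log(n / f(t)), so ubiquitous tokens score near zero and rare ones score high. A sketch of the per-field token table built that way (tokenize() is assumed, not defined here):

        import math
        from collections import defaultdict

        # freq[field][token] = number of records containing token in that field
        freq = defaultdict(lambda: defaultdict(int))
        n_records = 0

        def index_record(record):              # record: dict of field -> raw text
            global n_records
            n_records += 1
            for field, text in record.items():
                for token in set(tokenize(text)):  # count each token once per record
                    freq[field][token] += 1

        def weight(field, token):
            f = freq[field].get(token, 0)
            return math.log(float(n_records) / f) if f else 0.0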

    Read the article

  • opening a new page and passing a parameter to it with Javascript

    - by recipriversexclusion
    I have a JS function that processes XML I GET from a server to dynamically create a table as below // Generate table of services by parsing the returned XML service list. function convertServiceXmlDataToTable(xml) { var tbodyElem = document.getElementById("myTbody"); var trElem, tdElem; var j = 0; $(xml).find("Service").each(function() { trElem = tbodyElem.insertRow(tbodyElem.rows.length); trElem.className = "tr" + (j % 2); // first column -> service ID var serviceID = $(this).attr("service__id"); tdElem = trElem.insertCell(trElem.cells.length); tdElem.className = "col0"; tdElem.innerHTML = serviceID; // second column -> service name tdElem = trElem.insertCell(trElem.cells.length); tdElem.className = "col1"; tdElem.innerHTML = "<a href=javascript:showServiceInfo(" + serviceID + ")>" + $(this).find("name").text() + "</a>"; j++; }); // each service } where showServiceInfo retrieves more detailed information about each service from the server based on the serviceID and displays it in the same page as the services table. So far so good. But, it turns out that I have to display the detailed service information in another page rather than the same page as the table. I have created a generic service_info.html with an empty layout template, but I don't understand how to pass serviceID to this new page so that it gets customized with the detailed service info. How can I do this? Or, is there a better way to handle these situations? Thanks!
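
    The conventional mechanism here is the query string: open service_info.html with the ID appended, then read it back from location.search when the new page loads. A sketch (the rendering call at the end is an assumption):

        // In the list page: navigate with the ID in the query string.
        function showServiceInfo(serviceID) {
            window.open("service_info.html?serviceID=" + encodeURIComponent(serviceID));
        }

        // In service_info.html: recover the ID and fetch the detailed info.
        $(document).ready(function () {
            var match = /[?&]serviceID=([^&]+)/.exec(window.location.search);
            if (match) {
                var serviceID = decodeURIComponent(match[1]);
                // e.g. request details from the server and render them:
                // $.get("service_info_api.php", { id: serviceID }, renderDetails);
            }
        });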

    Read the article

  • Unable to get data from a WCF client

    - by Scott
    I am developing a DLL that will provide synchronized time stamps to multiple applications running on the same machine. The timestamps are altered in a thread that uses a high performance timer and a scalar to provide the appearance of moving faster than real-time. For obvious reasons I want only 1 instance of this time library, and I thought I could use WCF for the other processes to connect to this and poll for timestamps whenever they want. When I connect however I never get a valid time stamp, just an empty DateTime. I should point out that the library does work. The original implementation was a single DLL that each application incorporated and each one was synced using windows messages. I'm fairly sure it has something to do with how I'm setting up the WCF stuff, to which I am still pretty new. Here are the contract definitions: public interface ITimerCallbacks { [OperationContract(IsOneWay = true)] void TimerElapsed(String id); } [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(ITimerCallbacks))] public interface ISimTime { [OperationContract] DateTime GetTime(); } Here is my class definition: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)] public class SimTimeServer: ISimTime The host setup: // set up WCF interprocess comms host = new ServiceHost(typeof(SimTimeServer), new Uri[] { new Uri("net.pipe://localhost") }); host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime"); host.Open(); and the implementation of the interface function server-side: public DateTime GetTime() { if (ThreadMutex.WaitOne(20)) { RetTime = CurrentTime; ThreadMutex.ReleaseMutex(); } return RetTime; } Lastly the client-side implementation: Callbacks myCallbacks = new Callbacks(); DuplexChannelFactory<ISimTime> pipeFactory = new DuplexChannelFactory<ISimTime>(myCallbacks, new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/SimTime")); ISimTime pipeProxy = pipeFactory.CreateChannel(); while (true) { string str = Console.ReadLine(); if (str.ToLower().Contains("get")) Console.WriteLine(pipeProxy.GetTime().ToString()); else if (str.ToLower().Contains("exit")) break; }
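
    One server-side detail worth ruling out (a sketch, not a confirmed diagnosis): when WaitOne(20) times out, GetTime() silently returns whatever RetTime last held, which on a fresh service instance is default(DateTime) - exactly the empty DateTime being observed. Copying under a plain lock removes that silent path:

        private readonly object timeLock = new object();

        public DateTime GetTime()
        {
            lock (timeLock)    // the DateTime copy is cheap, contention is brief
            {
                return CurrentTime;
            }
        }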

    Read the article
