Search Results

Search found 3521 results on 141 pages for 'parallel computing'.


  • How to prevent external translation of a movieclip object on stage in AS3?

    - by Aaron H.
    I have a MovieClip object which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper-left corner, about half off stage (i.e. only the lower-right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not the upper-left corner. If you call getBounds() on the movieclip you can get the bounds of the clip (presumably relative to the point it's aligned on), which look something like (left: -303, top: -100, right: 303, bottom: 100). You can then subtract the left and top values from the clip's x and y: clip.x -= bounds.left; clip.y -= bounds.top; This seems to properly align the clip fully on stage, with the top left of the clip squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage! clip.x = (stage.stageWidth / 2); etc. This creates the crazy parallel universe where the clip is now down in the lower-right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of the stage width) and a ty value of 426 (half of the stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the movieclip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • How do I merge multiple PDB files?

    - by blue.tuxedo
    We are currently using a single command-line tool to build our product on both Windows and Linux. So far it works nicely, allowing us to build out of source and with finer dependencies than any of our previous build systems allowed. This buys us great incremental and parallel build capabilities. To describe the build process briefly, we get the usual:

        .cpp                    -- cl.exe  --> .obj and .pdb
        multiple .obj and .pdb  -- cl.exe  --> single .dll .lib .pdb
        multiple .obj and .pdb  -- cl.exe  --> single .exe .pdb

    The MSVC C/C++ compiler supports this adequately. Recently the need to build a few static libraries emerged. From what we gathered, the process to build a static library is:

        multiple .cpp  -- cl.exe  --> multiple .obj and a single .pdb
        multiple .obj  -- lib.exe --> a single .lib

    The single .pdb means that cl.exe should only be executed once for all the .cpp sources. This single execution means that we can't parallelize the build for this static library, which is really unfortunate. We investigated a bit further and, according to the documentation (and the available command-line options): cl.exe does not know how to build static libraries, and lib.exe does not know how to build .pdb files. Does anybody know a way to merge multiple PDB files? Are we doomed to have slow builds for static libraries? How do tools like IncrediBuild work around this issue?

    Read the article

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure if the title sounds right, actually, so I will give more explanation here. I will begin from the very beginning :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service; they should all run asynchronously so that they finish at about the same time, so I use the Async methods of the generated web-service reference and then wait for results in callbacks. But it seems like it starts a thread (or takes one from the pool) for every async call I make, so if I have 10 clients I'd have to spawn 30 to 100 threads, and that sounds terrible even for my 16-core server :) So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel on one thread without much overhead. But I don't actually know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates and get an HTTP request with all headers and body from there, and then send it with my own sockets... Any ideas what approach to use? Or maybe there's something built into the framework I can use?
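
    A minimal sketch of the hook mentioned above (the custom transport class is hypothetical; only the GetWebRequest override is a real, documented extension point on the proxy's base class):

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        // Hypothetical skeleton: SocketWebRequest is not a real framework class.
        // Whatever WebRequest the override returns is the object the proxy
        // serializes the SOAP envelope into and reads the response from.
        class SocketWebRequest : WebRequest
        {
            private readonly Uri _uri;
            public SocketWebRequest(Uri uri) { _uri = uri; }
            public override Uri RequestUri { get { return _uri; } }
            // Override Method, Headers, GetRequestStream(), GetResponse() and the
            // Begin/End async variants to push the request bytes through a shared
            // pool of non-blocking sockets.
        }

        class MyServiceProxy : SoapHttpClientProtocol   // or a partial class of the generated proxy
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                // return base.GetWebRequest(uri);       // default HttpWebRequest transport
                return new SocketWebRequest(uri);        // custom transport instead
            }
        }

    Whether this is worth it is another matter: the generated proxies are partial classes, so the override can be added without touching generated code, but the custom WebRequest still has to honour the full request/response contract the proxy expects.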

    Read the article

  • How would you like computer science classes to be taught?

    - by aaa
    Hello, I am a graduate student now, and hopefully someday I will teach. My interests are C++, Python, embedded languages, and scientific computing. Meanwhile I daydream about how I would teach. I was not quite happy with my undergraduate university, as I found many computer science classes lacking. So I would like to ask you: if you were a student, how would you like your computer science classes to be taught? I understand it is a very subjective question, but nevertheless I think it's important to know what people want. Some specific points I am interested in: should computer languages be taught explicitly, or should students be required to pick up languages on their own? What is better for learning: tests, projects, some sort of take-home exam? How do you think class time should be used: theory, introduction, explanations, etc.? Do you think group projects are important? How much about computer architecture do you want to learn in a computer science class (not necessarily an assembler class)? Should a particular operating system/editor be mandated or encouraged? Thank you for your comments. The question has been closed because it is a discussion question rather than Q&A. If you know an appropriate website for discussions of this sort with a low noise ratio, please let me know.

    Read the article

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by JavaScript

    - by AlexH
    I am building a library to automatically create forms for objects in the project that I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programmatically inspect them to generate a form for all of the properties. The problem is that I want a simple way of enforcing permissions and doing validation on the client side. It needs to be done on a field-by-field level. In JavaScript I would do this by creating a parallel object structure, which had some sort of { permissions : "someLevel", validator : someFunction } object at the nodes, with empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am over-familiar with the hammer that is JavaScript, this is really the only way that I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries that can be programmatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects to compare against. Very JavaScripty. Very awkward. Is there some better way that I can do this? Essentially a way to associate a data set with each property, and then iterate over those properties. Or else am I Doing It Wrong?
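
    For comparison, one idiomatic C# route is to hang the per-field metadata off the properties with custom attributes rather than a parallel dictionary structure. A minimal sketch (the attribute, rule names and the Person class are all invented; attributes are still read via reflection, but only per type rather than by treating every object as a dictionary):

        using System;
        using System.Reflection;

        // Hypothetical metadata attribute: a permission level plus one simple validation rule.
        [AttributeUsage(AttributeTargets.Property)]
        public class FieldRuleAttribute : Attribute
        {
            public string Permission { get; set; }
            public int MaxLength { get; set; }

            public bool Validate(object value)
            {
                if (MaxLength <= 0) return true;        // no length rule configured
                var s = value as string;
                return s == null || s.Length <= MaxLength;
            }
        }

        public class Person
        {
            [FieldRule(Permission = "admin", MaxLength = 40)]
            public string Name { get; set; }

            [FieldRule(Permission = "user")]
            public string Nickname { get; set; }

            public string Notes { get; set; }           // no attribute: "free" permissions
        }

        public static class RuleRunner
        {
            // Walk the properties once and apply whatever rule metadata is present.
            public static void Check(object target)
            {
                foreach (PropertyInfo prop in target.GetType().GetProperties())
                {
                    var rule = (FieldRuleAttribute)Attribute.GetCustomAttribute(prop, typeof(FieldRuleAttribute));
                    if (rule == null) continue;
                    bool ok = rule.Validate(prop.GetValue(target, null));
                    Console.WriteLine("{0}: permission={1}, valid={2}", prop.Name, rule.Permission, ok);
                }
            }
        }

    The same attribute data can be serialized alongside the JSON so the client-side checks stay in sync with the server-side definitions.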

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ]. I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users who have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds' difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented under such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • Start-Job to call script from main

    - by Naveen
    I have three scripts. From the main script (script 1) I am calling the other two scripts, so that I can execute both in parallel, because running them sequentially takes too much time. Only the variables differ between the two scripts. How can I merge scripts 2 and 3 into a single script that I can call from the main script and that will still run in parallel?

        1 CompareCtrlM... Completed False localhost ######################...
        3 CompareCtrlM... Completed True  localhost ######################...

    Main script (script 1):

        Start-Job -Name "LoopComparectrlMasterModel" -filepath D:\tmp\naveen\Script\CompareCtrlMasterCtrlModel.ps1
        Start-Job -Name "LoopCompareProdMasterModel" -filepath D:\idv\CA\rcm_data\tmp\work\CompareCtrlMasterProdModel.ps1
        Wait-Job -Name "LoopComparectrlMasterModel"
        Receive-Job "LoopComparectrlMasterModel"
        Wait-Job -Name "LoopCompareProdMasterModel"
        Receive-Job "LoopCompareProdMasterModel"

    Script 2:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterProdModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Prod Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterProdModel[$i-1] $cfgProdModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterProdModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }

    Script 3:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterCtrlModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Ctrl Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterCtrlModel[$i-1] $cfgCtrlModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterCtrlModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }
        write-output $results

    Thanks a lot for the help. Regards, Naveen

    Read the article

  • Are finalizers ever allowed to call other managed classes' methods?

    - by romkyns
    I used to be pretty sure the answer is "no", as explained in the Overriding the Finalize method and Object.Finalize documentation. However, while randomly browsing through FileStream in Reflector, I found that it can actually call just such a method from a finalizer:

        private SafeFileHandle _handle;

        ~FileStream()
        {
            if (this._handle != null)
            {
                this.Dispose(false);
            }
        }

        protected override void Dispose(bool disposing)
        {
            try { ... }
            finally
            {
                if ((this._handle != null) && !this._handle.IsClosed)  // <=== HERE
                {
                    this._handle.Dispose();                            // <=== AND HERE
                }
                [...]
            }
        }

    I started wondering whether this will always work due to the exact way in which it's written, and hence whether "do not touch managed classes from finalizers" is just a guideline that can be broken given a good reason and the necessary knowledge to do it right. I dug a bit deeper and found out that the worst that can happen when the "rule" is broken is that the managed object being accessed has already been finalized, or may be getting finalized in parallel on a separate thread. So if the SafeFileHandle's finalizer doesn't do anything that would cause a subsequent call to Dispose to fail, then the above should be fine... right? Question: so there might after all be situations in which a method on another managed class may be called reliably from a finalizer? I've always believed this to be false, but this code suggests that it's possible and that there can be good enough reasons to do it. Bonus: observe that the SafeFileHandle will not even know it's being called from a finalizer, since this is just a normal call to Dispose(). The base class, SafeHandle, actually has two private methods, InternalDispose and InternalFinalize, and in this case InternalDispose will be called. Isn't this a problem? Why not?...
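
    For contrast, a minimal sketch of the conventional guideline the question is probing (not FileStream's real code): in the standard dispose pattern the finalizer passes disposing: false, and other managed objects are only touched on the disposing: true path, precisely because by finalization time they may already have been finalized themselves.

        using System;
        using System.IO;

        // Sketch of the textbook pattern; _writer is a stand-in managed member.
        public class ResourceHolder : IDisposable
        {
            private StreamWriter _writer = new StreamWriter("log.txt");
            private IntPtr _nativeHandle;          // pretend unmanaged resource

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            ~ResourceHolder()
            {
                Dispose(false);                    // managed members may already be gone
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing && _writer != null)
                {
                    _writer.Dispose();             // safe: called from Dispose(), not the finalizer
                    _writer = null;
                }
                // Unmanaged cleanup goes here on both paths.
                _nativeHandle = IntPtr.Zero;
            }
        }

    What makes the FileStream case interesting is that it breaks this pattern on purpose, leaning on SafeHandle's guarantees as a critical-finalizable object (its finalizer runs after ordinary finalizers), which is part of what the question is asking about.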

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial-port emulator) and the D2XX drivers (the basic COM driver is too slow to handle the 4.5 Mbit transfer). Basically the application consists of 3 threads: the main thread (GUI, control, etc.); the bus reader, in which data is continuously read from the device and saved to a file buffer (there is no logic in this thread); and the data interpreter, which reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files. The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't use recording all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave-file generator is pretty slow. For 24 parallel recordings of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process would increase the speed a lot. The second problem is that the file buffer is really heavy for long recordings and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array where the bus reader saves the data and the data interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
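
    One way to get the circular-buffer behaviour without hand-rolling the synchronization is a bounded queue of pooled chunks; a minimal sketch (chunk sizes, names and the device/interpreter callbacks are invented) in which total memory stays capped at ChunkCount * ChunkSize and the reader blocks automatically if the interpreter falls behind:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class ChunkPipeline
        {
            const int ChunkSize = 1 << 20;            // 1 MB per chunk
            const int ChunkCount = 1024;              // ~1 GB ceiling

            readonly BlockingCollection<Tuple<byte[], int>> _filled =
                new BlockingCollection<Tuple<byte[], int>>(ChunkCount);
            readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();

            public void Run(Func<byte[], int> readFromDevice, Action<byte[], int> interpret)
            {
                var reader = Task.Factory.StartNew(() =>
                {
                    while (true)
                    {
                        byte[] chunk;
                        if (!_pool.TryTake(out chunk)) chunk = new byte[ChunkSize];
                        int n = readFromDevice(chunk);       // blocking device read
                        if (n == 0) break;
                        _filled.Add(Tuple.Create(chunk, n)); // blocks when the queue is full
                    }
                    _filled.CompleteAdding();
                });

                // Interpreter side: consume, then recycle the chunk instead of re-allocating.
                foreach (var item in _filled.GetConsumingEnumerable())
                {
                    interpret(item.Item1, item.Item2);
                    _pool.Add(item.Item1);
                }
                reader.Wait();
            }
        }

    Reusing fixed-size arrays this way also avoids repeatedly allocating large buffers on the large object heap, which matters at the 1 GB scale being discussed.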

    Read the article

  • javascript equivalent of ASP preInit event

    - by Pankaj Kumar
    Hi guys, searching for this has yielded no results. I have a middle page that has an iframe, and all the pages open in that iframe. I have to implement a chat system just like Google's, and to ensure that the chat windows do not close when the page posts back, I am opening all pages in the iframe. This is a social-networking site and users can have themes, with one background image per theme. Now, when I visit a friend's profile, the background image of the middle page should change. I have this code to find the parent of the iframe and change the background image:

        $(document).ready(function(){
            var theme = document.getElementById("ctl00_decideFooterH").value;
            var t = parent.document.getElementsByTagName("body");
            if (theme == 'basicTheme') {
                t[0].className = "basic";
            } else if (theme == 'Tranquility') {
                t[0].className = "Tranquility";
            } else if (theme == 'AbstractPink') {
                t[0].className = "abstractPink";
            }
        });

    This code is in the master page that all child pages opened in the iframe use. The trouble is that there is a lag between the theme being applied on the server side (which happens at PreInit) and the background-image change, which is done by this document.ready handler that only executes after the iframe has loaded. So essentially I need a JavaScript function that would execute in parallel with PreInit, or some other logic... So guys, please help; it's urgent.

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example: Initiated by the user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the DB), etc. This can take from one minute to ten minutes (or even more, depending on file size). Of course I don't want to block the request while the import is running; instead I want to redirect the user to some progress page where he will have a chance to watch the status and errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time. It would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import in that thread. But I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach? Initiated by the system: I will have to periodically update the Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.NET, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!
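
    For the user-initiated case, a minimal in-process sketch (all names invented, with the usual caveat that work queued like this does not survive an application-pool recycle, which is the main argument for pushing it into a Windows service): the controller starts a job, stores a handle in a registry keyed by id, redirects to the progress page, and the progress page polls the registry.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        public class ImportJob
        {
            public volatile int PercentDone;        // written by the worker, read by the progress page
            public volatile string Error;
            public readonly CancellationTokenSource Cancel = new CancellationTokenSource();
        }

        public static class ImportJobs
        {
            static readonly ConcurrentDictionary<Guid, ImportJob> Jobs =
                new ConcurrentDictionary<Guid, ImportJob>();

            public static Guid Start(string uploadedFilePath)
            {
                var id = Guid.NewGuid();
                var job = new ImportJob();
                Jobs[id] = job;

                Task.Factory.StartNew(() =>
                {
                    try
                    {
                        for (int step = 0; step <= 100; step += 10)   // stand-in for the real import steps
                        {
                            job.Cancel.Token.ThrowIfCancellationRequested();
                            Thread.Sleep(500);                        // pretend to process uploadedFilePath
                            job.PercentDone = step;
                        }
                    }
                    catch (Exception ex) { job.Error = ex.Message; }
                }, job.Cancel.Token);

                return id;
            }

            public static ImportJob GetStatus(Guid id)
            {
                ImportJob job;
                return Jobs.TryGetValue(id, out job) ? job : null;
            }
        }

    The system-initiated work (index updates, mass mail) fits the same registry shape, but since no user request is waiting on it, a scheduler such as Quartz.NET or a dedicated Windows service is the more natural host.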

    Read the article

  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a “Sync with Outlook” feature that I developed 10 years ago. Now, I’m going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called “Use MAPI Enhancements” where it uses pure MAPI to speed up how it looks for changes, and it allows notes to be synced with RTF instead of just plain text. I'm wondering if supporting two parallel paths of execution is a good idea or not. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus has "script-blocking" features which block my app from connecting to Outlook. But I believe that on the downside, my 32-bit app would not be able to connect with 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }

                    flag1 = false;
                    flag2 = false;
                    val = 0;

                    Parallel.Invoke(A1, A2);

                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is why... I suppose that the CPU reorders the flag1 = true; write with the if (flag2) read, but the flag1 and flag2 variables are marked as volatile fields...
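
    For what it's worth, this is the classic store-load case: in the CLR memory model a volatile write has release semantics and a volatile read has acquire semantics, but a volatile write followed by a volatile read of a different location may still be reordered. A sketch of drop-in replacements for A1/A2 above that rule this out with an explicit full fence (assuming that reordering is indeed what triggers the exception):

        using System.Threading;

        static void A1()
        {
            flag2 = true;
            Thread.MemoryBarrier();   // full fence: the store above completes before the load below
            if (flag1) val = 1;
        }

        static void A2()
        {
            flag1 = true;
            Thread.MemoryBarrier();
            if (flag2) val = 2;
        }

    With the fences in place, at least one of the two methods must observe the other's flag as true, so val can no longer remain 0.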

    Read the article

  • Building Paypal based membership website - total noob - would appreciate help

    - by Ali
    This is a follow-up to my question on PayPal integration. I'm working on a membership site for racing fans. My membership site has 3 membership levels: free, gold and premium. When a user signs up, he/she gets a free membership on the spot, but has the option to upgrade to a gold membership for 4 dollars a month or a premium membership for 10 dollars a month. I've gone through the PayPal integration guide a few times and have a vague understanding of how to get this to work. I think the recurring-payments option would be fine; however, I don't know how to implement this in my system. When a user decides to go for a paid account, i.e. gold or premium from basic, what should I do on both my code side and on the PayPal account side? I'd really appreciate it if anyone would outline what I'd have to do here. Plus, when a user decides to upgrade from, let's say, a gold to a premium account, there is the issue of computing how much should be charged for the upgrade. E.g.: a user has been billed 4 dollars and the next day opts to go for a premium account; assuming that the surplus for the rest of the month is 5 dollars, and from then on all payments would be a recurring 10 dollars monthly, how do I implement this? And in case a user decides to downgrade from a premium account at 10 dollars a month to a gold account at 4 dollars a month, how do I handle the surplus which would have to be refunded for that month alone, and change the membership? And likewise, if someone wishes to cancel their membership and go back to a free account, how do I refund whatever is owed and cancel the subscription? I'm sorry if it sounds like I'm asking to be spoon-fed :( I'm quite new to this, this is for a client, and I would really appreciate all the help here; I really have to get this working right. Thanks again everyone - waiting for all your replies.
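
    As a worked example of the proration arithmetic only (plain pro-rating, not something PayPal prescribes): if a gold member paid 4 dollars for a 30-day cycle and upgrades after 10 days, the unused gold value is 4 × 20/30 ≈ 2.67 dollars and the premium cost for the remaining 20 days is 10 × 20/30 ≈ 6.67 dollars, so the upgrade charge for the current month is about 4 dollars, and from the next cycle the recurring profile simply bills 10 dollars. A downgrade is the same calculation in reverse, with the difference issued as a refund or a credit.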

    Read the article

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that (1) the residuals don't systematically covary with the predictor, and (2) the residuals are homoscedastic with respect to each predictor in the model. On point (2), my textbook has this to say: "Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X." (p. 131) How can I modify loess lines? I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)   # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • How does static code run with multiple threads?

    - by Krisc
    I was reading http://stackoverflow.com/questions/1511798/threading-from-within-a-class-with-static-and-non-static-methods and I am in a similar situation. I have a static method that pulls data from a resource and creates some runtime objects based on the data.

        static class Worker
        {
            public static MyObject DoWork(string filename)
            {
                MyObject mo = new MyObject();
                // ... does some work
                return mo;
            }
        }

    The method takes a while (in this case it is reading 5-10 MB files) and returns an object. I want to take this method and use it in a multi-threaded situation so I can read multiple files at once. Design issues / guidelines aside, how would multiple threads access this code? Let's say I have something like this...

        class ThreadedWorker
        {
            public void Run()
            {
                Thread t = new Thread(OnRun);
                t.Start();
            }

            void OnRun()
            {
                MyObject mo = Worker.DoWork("somefilename");
                mo.WriteToConsole();
            }
        }

    Does the static method run for each thread, allowing for parallel execution?
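
    A small sketch of this in action (file names invented; DoWork simplified to a stand-in): because the static method keeps all of its state in locals and parameters, each call gets its own stack frame, so concurrent calls do not interfere and genuinely run in parallel.

        using System;
        using System.Threading;

        static class Worker
        {
            // Stand-in for the real DoWork: everything lives in locals,
            // so concurrent calls share no state.
            public static string DoWork(string filename)
            {
                Thread.Sleep(1000);                    // simulate reading a large file
                return "parsed: " + filename;
            }
        }

        class Program
        {
            static void Main()
            {
                string[] files = { "a.dat", "b.dat", "c.dat" };   // hypothetical inputs
                var threads = new Thread[files.Length];

                for (int i = 0; i < files.Length; i++)
                {
                    string file = files[i];            // copy per iteration, so each thread gets its own
                    threads[i] = new Thread(() => Console.WriteLine(Worker.DoWork(file)));
                    threads[i].Start();
                }

                foreach (var t in threads) t.Join();   // all three finish in ~1 second, not 3
            }
        }

    The caveat is shared state: if DoWork wrote to a static field or a shared cache, those accesses would need their own synchronization.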

    Read the article

  • Trigger ad-hoc activity within a workflow

    - by Chris Taylor
    I am looking to use WF4 to replace an existing workflow solution we have. One feature that is currently used in the existing workflow engine is the ability to cancel the current activity and loop back to a FlowSwitch-type activity. [Crude ASCII diagram in the original: a start node 'O' feeds a FlowSwitch that branches to activities A1, A2 or A3, with a path looping back from the activities to the FlowSwitch.] So, given such a workflow, where we start at 'O' and, based on the input data, the workflow follows the path to 'A2', which is currently blocking on a bookmark waiting for input: in the meantime some out-of-band data comes in that means we should cancel 'A2' and return to the FlowSwitch to re-evaluate based on the new data. The question is, what is the best way to handle the out-of-band data that arrived? My initial guess is to have a Parallel activity with one branch waiting for out-of-band data and the other branch containing the workflow sequence described above. If data came in on the branch waiting for the out-of-band data, how would I cancel the current activity in the workflow and force it to return to the FlowSwitch? Or, of course, is there a better way to handle this? I have not actually done any work with WF4 (or WF3, for that matter), so I might be missing something obvious here.

    Read the article

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo("flac.exe");
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format(flacParam, SourceFile);

        ProcessStartInfo piLame = new ProcessStartInfo("lame.exe");
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format(lameParam, QualitySetting, ExtractTag(SourceFile));

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start(piFlac);

        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler(this.ReadStdout);
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        while (count != 0)
        {
            lamep.StandardInput.BaseStream.Write(buffer, 0, count);
            count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        }

    Here I set the command-line parameters to tell lame.exe to write its output to stdout, and I use the Process.OutputDataReceived event to gather the output data, which is mostly binary data. But DataReceivedEventArgs.Data is of type "string" and I have to convert it to byte[] before putting it in the cache; I think this is ugly, and I tried this approach but the result is incorrect. Is there any way that I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason why I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and writing directly to disk would cause severe fragmentation. Thanks a lot!
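
    A minimal sketch of the direct route (the argument string and the destination stream are placeholders): skip BeginOutputReadLine / OutputDataReceived entirely and read lame's redirected stdout as a raw byte stream via StandardOutput.BaseStream, the same way the code above already reads flac's stdout.

        using System.Diagnostics;
        using System.IO;

        static void PipeLameOutput(string lameArgs, Stream destination)
        {
            var psi = new ProcessStartInfo("lame.exe")
            {
                CreateNoWindow = true,
                UseShellExecute = false,
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                Arguments = lameArgs
            };

            using (Process lamep = Process.Start(psi))
            {
                // Note: whatever feeds lamep.StandardInput (the flac stdout loop above)
                // must run on a different thread than this read loop, otherwise the two
                // pipes can fill up and deadlock each other.
                Stream raw = lamep.StandardOutput.BaseStream;    // raw binary stdout, no string conversion
                byte[] buffer = new byte[64 * 1024];
                int count;
                while ((count = raw.Read(buffer, 0, buffer.Length)) > 0)
                {
                    destination.Write(buffer, 0, count);         // e.g. a MemoryStream kept per file
                }
                lamep.WaitForExit();
            }
        }

    Buffering each file's MP3 bytes in memory this way and flushing them to disk in one go also addresses the fragmentation concern mentioned in the postscript.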

    Read the article

  • Magic function `bash ` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I should make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp into the cluster and run. This script will then download the relevant simulation files, run them, and upload the results. Part of this automation script is in bash (cp, scp, etc) and the rest is in python. In order to develop this automation, I am using an IPython notebook. So far, I've coded all the python automation stuff in my IPython notebook and am trying to write the bash part of it now. However, it seems that the magic %%bash doesn't work in my IPython notebook. I get the following error when I have this code in my cell:

    Cell:

        %%bash
        echo hi

    Error:

        File "<ipython-input-22-62ec98e35224>", line 3
            echo hi
                 ^
        SyntaxError: invalid syntax

    On a whim, I tried this:

    Cell:

        %%bash
        print "hi"

    Error:

        hi
        ERROR: Magic function `bash` not found.

    So I tried this with %%system, %%! and %%shell. But none of those work; they all give me the same error. Why is this happening? How can I fix this?

    Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens)
        {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.

    Read the article

  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the PERMNO column of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO            RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    and then use a foreach loop with parallel computing which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
                    get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it for a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure could be improved upon and there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?

    Read the article

  • Natural problems to solve using closures

    - by m.u.sheikh
    I have read quite a few articles on closures and, embarrassingly enough, I still don't understand the concept! The articles explain how to create a closure with a few examples, but I don't see any point in paying much attention to them, as they largely look like contrived examples. I am not saying all of them are contrived, just that the ones I found looked contrived, and I didn't see how, even after understanding them, I would be able to use them. So in order to understand closures, I am looking for a few real problems that can be solved very naturally using closures. For instance, a natural way to explain recursion to a person could be to explain the computation of n!. It is very natural to understand a problem like computing the factorial of a number using recursion. Similarly, it is almost a no-brainer to find an element in an unsorted array by reading each element and comparing it with the number in question. Also, at a different level, doing object-oriented programming makes sense in the same way. So I am trying to find a number of problems that could be solved with or without closures, but where using closures makes them easier to think about and solve. Also, there are two kinds of closures: each call to a closure can either create a copy of the captured environment variables or reference the same variables. So what sort of problems can be solved more naturally with which of the closure implementations?
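
    As one concrete (C#, invented) illustration of the reference-capturing kind: a counter factory, where the captured local variable itself is the state and no class with a counter field ever gets written.

        using System;

        class ClosureDemo
        {
            // Returns a delegate that closes over its own private 'count' variable.
            static Func<int> MakeCounter()
            {
                int count = 0;
                return () => ++count;       // the lambda references count, not a copy of it
            }

            static void Main()
            {
                Func<int> a = MakeCounter();
                Func<int> b = MakeCounter();    // independent captured environment

                Console.WriteLine(a());  // 1
                Console.WriteLine(a());  // 2
                Console.WriteLine(b());  // 1  -- b has its own count
            }
        }

    The same shape covers many of the "natural" uses people cite: memoization caches, event handlers that remember the item they were wired up for, and callbacks that carry per-request context without extra parameter plumbing.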

    Read the article
