Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

Page 164/195 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • How do I merge multiple PDB files?

    - by blue.tuxedo
    We are currently using a single command-line tool to build our product on both Windows and Linux. So far it works nicely, allowing us to build out of source and with finer dependencies than any of our previous build systems allowed. This buys us great incremental and parallel build capabilities. Briefly, the build process is the usual:

        .cpp -- cl.exe --> .obj and .pdb
        multiple .obj and .pdb -- cl.exe --> single .dll, .lib and .pdb
        multiple .obj and .pdb -- cl.exe --> single .exe and .pdb

    The MSVC C/C++ compiler supports this adequately. Recently the need to build a few static libraries emerged. From what we gathered, the process to build a static library is:

        multiple .cpp -- cl.exe --> multiple .obj and a single .pdb
        multiple .obj -- lib.exe --> a single .lib

    The single .pdb means that cl.exe should only be executed once for all the .cpp sources, and that single execution means we can't parallelize the build for this static library. This is really unfortunate. We investigated a bit further, and according to the documentation (and the available command-line options), cl.exe does not know how to build static libraries, and lib.exe does not know how to build .pdb files. Does anybody know a way to merge multiple PDB files? Are we doomed to have slow builds for static libraries? How do tools like IncrediBuild work around this issue?

    Read the article

  • How to prevent external translation of a movieclip object on stage in AS3?

    - by Aaron H.
    I have a MovieClip object which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper left corner, about half off stage (i.e. only the lower right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not its upper left corner. If you call getBounds() on the movieclip you can get the bounds of the clip (presumably relative to the point it's aligned on), which look something like (left: -303, top: -100, right: 303, bottom: 100), and you can subtract the left and top values from the clip's x and y:

        clip.x -= bounds.left;
        clip.y -= bounds.top;

    This seems to properly align the clip fully on stage, with its top left squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage:

        clip.x = (stage.stageWidth / 2); etc...

    This creates the crazy parallel universe where the clip is now down in the lower right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of the stage width) and a ty value of 426 (half of the stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the movieclip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, beginning from the very beginning :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service; they should all run asynchronously and finish at roughly the same time, so I use the Async method of the generated web-service reference and then wait for the result in a callback. But it seems to start a thread (or take one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, which sounds terrible even for my 16-core server :) So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel on one thread without much overhead. But I don't actually know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates and get an HTTP request with all headers and body from there, and then send it with my own sockets... Any ideas what approach to use? Or maybe there's something built into the framework I can use?
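    For reference, the GetWebRequest hook mentioned above can be overridden on any class derived from SoapHttpClientProtocol (the wsdl.exe-generated proxy already is one). A minimal sketch, with an invented class name - it only shows where the interception point sits, not a full non-blocking socket transport:

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        // In practice the same override would be added to the generated proxy class,
        // which already derives from SoapHttpClientProtocol.
        public class InterceptingSoapClient : SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                // The base implementation builds the HttpWebRequest that will carry the SOAP envelope.
                WebRequest request = base.GetWebRequest(uri);

                // Headers are available here; the body is written later by the framework, so a real
                // custom transport would need to wrap the request (or its request stream) rather than
                // merely inspect it at this point.
                Console.WriteLine("Outgoing SOAP request to: " + request.RequestUri);
                return request;
            }
        }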

    Read the article

  • Getting a lightweight installation of Java Eclipse

    - by liam
    Having dealt with yet another stupid Eclipse problem, I want to try to get the lightest, most minimal Eclipse installation possible. To be clear, I use Eclipse for two things: editing Java and debugging Java. Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc.). I have not found any aspect of working in Eclipse on those tasks to be efficient or even reliable, so I do not want plug-ins related to them. The lightest install available from the eclipse.org site still includes things I don't want (Bugzilla, Mylyn, CVS, xml_ui), and I have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will:
    1) ignore svn metadata,
    2) include the full-featured editor (intellisense and type-finding), and
    3) include the full-featured debugger (standard eclipse/jdk)?
    It should not have any extra plug-ins, platforms, or "integrations" with other platforms. Specifically, I don't want to deal with plug-ins relating to Maven, JSP validation, JavaScript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled Tomcat/GlassFish/etc., J2EE tools, or anything of the like. I do primarily Spring/Hibernate/web-MVC apps, and I have never dealt with an Eclipse plug-in that handles any of it gracefully; I can work effectively with my own toolset, but Eclipse extensions do nothing but get in the way. I have worked with plain Eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of the Spring Source Tools, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to NetBeans/IntelliJ is not an option, and my teammates work with svn-controlled .classpath/.project files, so it pretty much has to be Eclipse. Does anyone have any good advice on how I can save a few grey hairs?

    Read the article

  • Clean up domain list in Excel - regex / macros?

    - by Tim
    I have a huge spreadsheet of domains that I need to clean up as follows:
    1. Remove all http:// (simple replace all - "http://" with "")
    2. Remove any www. (simple replace all - "www." with "")
    3. Delete any sub-domains (delete the actual row completely, not just the subdomain from the url)
    4. Remove anything after the domain extension, i.e. website.com/blah/blahbah/ becomes just website.com (simple replace all - "/*" with "", then replace all "/" with "")
    So what I'm left with is just a spreadsheet of clean domains like "website.com". I think I've got 1, 2 and 4 sorted (as above), but I'm really struggling with 3. Any ideas? Can I do this with regex / VBA, and actually delete the row completely? Sample data:
        http://www.scholastic.com/kids/stacks/games/
        http://imgworld.teamworkonline.com/
        http://topfreegraphics.com/
        http://www.workcircle.co.uk/
        http://www.healthycanadians.gc.ca/index-eng.php
        http://gsociology.icaap.org/methods/soft.html
    After steps 1, 2 and 4, I'd be left with:
        scholastic.com
        imgworld.teamworkonline.com
        topfreegraphics.com
        workcircle.co.uk
        healthycanadians.gc.ca
        gsociology.icaap.org
    It's those pesky sub-domains I need to delete completely - just delete the row. I've realised I can't simply search for two "." characters, because obviously plenty of domain extensions (i.e. .co.uk) include that. Any help appreciated.

    Read the article

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by JavaScript

    - by AlexH
    I am building a library to automatically create forms for objects in the project that I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programmatically inspect them to generate a form for all of the properties. The problem is that I want a simple way of enforcing permissions and doing validation on the client side, and it needs to be done on a field-by-field level. In JavaScript I would do this by creating a parallel object structure which had some sort of { permissions : "someLevel", validator : someFunction } object at the nodes, with empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am overfamiliar with the hammer that is JavaScript, this is really the only way I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries that can be programmatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects to compare against. Very javascripty. Very awkward. Is there some better way that I can do this? Essentially, a way to associate a data set with each property and then iterate over those properties. Or else am I Doing It Wrong?
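    A more C#-idiomatic way to attach this kind of per-field metadata is a custom attribute on each property; reflection is still needed to read the attributes, but only once per type, and everything else works against the cached result. A minimal sketch (the attribute, class and property names are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Property)]
        public class FieldRuleAttribute : Attribute
        {
            public string Permission { get; private set; }
            public FieldRuleAttribute(string permission) { Permission = permission; }
        }

        public class Customer
        {
            [FieldRule("admin")]
            public string CreditLimit { get; set; }

            public string Name { get; set; } // no attribute => free permissions
        }

        public static class RuleCache
        {
            // Built once per type via reflection, then reused for every form/validation pass.
            public static Dictionary<string, string> For(Type t)
            {
                var rules = new Dictionary<string, string>();
                foreach (PropertyInfo p in t.GetProperties())
                {
                    var attr = (FieldRuleAttribute)Attribute.GetCustomAttribute(p, typeof(FieldRuleAttribute));
                    if (attr != null)
                        rules[p.Name] = attr.Permission;
                }
                return rules;
            }
        }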

    Read the article

  • PowerShell PSCX Read-Archive: Cannot bind parameter... problem

    - by Robert
    I'm running across a problem I can't seem to wrap my head around using the Read-Archive cmdlet available via the PowerShell Community Extensions (v2.0.3782.38614). Here is a cut-down sample that exhibits the problem I'm running into:

        $mainPath = "p:\temp"
        $dest = Join-Path $mainPath "ps\CenCodes.zip"
        Read-Archive -Path $dest -Format zip

    Running the above produces the following error:

        Read-Archive : Cannot bind parameter 'Path'. Cannot convert the "p:\temp\ps\CenCodes.zip" value of type "System.String" to type "Pscx.IO.PscxPathInfo".
        At line:3 char:19
        + Read-Archive -Path <<<< $dest -Format zip
            + CategoryInfo : InvalidArgument: (:) [Read-Archive], ParameterBindingException
            + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Pscx.Commands.IO.Compression.ReadArchiveCommand

    If I do not use Join-Path to build the path passed to Read-Archive, it works, as in this example:

        $mainPath = "p:\temp"
        $path = $mainPath + "\ps\CenCodes.zip"
        Read-Archive -Path $path -Format zip

    Output from the above:

        ZIP Folder: CenCodes.zip#\
        Index LastWriteTime      Size     Ratio   Name
        ----- -------------      ----     -----   ----
        0     6/17/2010 2:03 AM  3009106  24.53 % CenCodes.xls

    Even more confusing is that if I compare the two variables passed as the Path argument in the two Read-Archive samples above, they seem identical. This...

        Write-Host "dest=$dest"
        Write-Host "path=$path"
        Write-Host ("path -eq dest is " + ($dest -eq $path).ToString())

    ...outputs...

        dest=p:\temp\ps\CenCodes.zip
        path=p:\temp\ps\CenCodes.zip
        path -eq dest is True

    Anyone have any ideas as to why the first sample gripes but the second one works fine?

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP => [ RESTlet <= HTTP => CouchDB ]. I'm also using CouchDB to store user login data, because I don't want to add an additional database server just for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes his password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests against it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users who have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are:
    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller so that a url like /products/new_view_here renders the corresponding view (a sketch of this idea follows below)? Can I let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.
    The ideas I've come up with so far are:
    - Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    - Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?
    Surely someone has tackled this problem before?
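    A rough sketch of the "default action" idea, assuming conventional ASP.NET MVC routing; the controller, route and parameter names are invented, and whether View() resolves a multi-segment name depends on the view engine, so treat this as an illustration only:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class PagesController : Controller
        {
            // /pages/anything/here -> renders the view named by the catch-all segment
            public ActionResult Show(string viewPath)
            {
                // In this sketch, view names are expected to map onto view files under Views/Pages.
                return View(viewPath);
            }
        }

        public static class RouteConfigSketch
        {
            public static void Register(RouteCollection routes)
            {
                routes.MapRoute(
                    "ContentPages",                      // route name
                    "pages/{*viewPath}",                 // catch-all URL pattern
                    new { controller = "Pages", action = "Show" });
            }
        }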

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (my understanding) excessive memory usage. I've noticed it takes 40M when it starts and then, by the time I notice the slowdown, it has grown to 1G and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons why it's such a difficult problem to solve. Mozilla have a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy):
    - When I close all tabs, why doesn't the memory usage go all the way down?
    - Why are there no limits on extensions/themes/plugins memory usage?
    - Why does the memory usage increase if it's left open for long periods of time?
    - Why are memory leaks so difficult to find and fix?
    App- and language-agnostic answers are also much appreciated.

    Read the article

  • Start-Job to call scripts from main

    - by Naveen
    I have three scripts. From the main script (script 1) I am calling the other two scripts so that I can execute them in parallel, because running them sequentially takes too much time. Only the variables differ between the two scripts. How can I merge scripts 2 and 3 into a single script that I can call from the main script and still have it run in parallel? Current job output:

        1 CompareCtrlM... Completed False localhost ######################...
        3 CompareCtrlM... Completed True  localhost ######################...

    Main script (script 1):

        Start-Job -Name "LoopComparectrlMasterModel" -FilePath D:\tmp\naveen\Script\CompareCtrlMasterCtrlModel.ps1
        Start-Job -Name "LoopCompareProdMasterModel" -FilePath D:\idv\CA\rcm_data\tmp\work\CompareCtrlMasterProdModel.ps1
        Wait-Job -Name "LoopComparectrlMasterModel"
        Receive-Job "LoopComparectrlMasterModel"
        Wait-Job -Name "LoopCompareProdMasterModel"
        Receive-Job "LoopCompareProdMasterModel"

    Script 2:

        for ($i = 1 ; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterProdModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Prod Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterProdModel[$i-1] $cfgProdModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -FilePath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterProdModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }

    Script 3:

        for ($i = 1 ; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterCtrlModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Ctrl Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterCtrlModel[$i-1] $cfgCtrlModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -FilePath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterCtrlModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }
        write-output $results

    Thanks a lot for the help. Regards, Naveen

    Read the article

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX), but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced:

        The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set.
        At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24

    While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1:

        #requires -version 2.0
        param([string[]]$PreCacheList)
        if ((!$SCRIPT:helpCache) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }

    This is pretty straightforward code: if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that evaluating $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet. Ideally we could use a Test-Variable cmdlet, but such a thing doesn't exist! I thought about looking in the variable: provider, but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?

    Read the article

  • Are finalizers ever allowed to call other managed classes' methods?

    - by romkyns
    I used to be pretty sure the answer is "no", as explained in Overriding the Finalize method and the Object.Finalize documentation. However, while randomly browsing through FileStream in Reflector, I found that it can actually call just such a method from a finalizer:

        private SafeFileHandle _handle;

        ~FileStream()
        {
            if (this._handle != null)
            {
                this.Dispose(false);
            }
        }

        protected override void Dispose(bool disposing)
        {
            try { ... }
            finally
            {
                if ((this._handle != null) && !this._handle.IsClosed)  // <=== HERE
                {
                    this._handle.Dispose();                            // <=== AND HERE
                }
                [...]
            }
        }

    I started wondering whether this will always work due to the exact way in which it's written, and hence whether the "do not touch managed classes from finalizers" is just a guideline that can be broken given a good reason and the necessary knowledge to do it right. I dug a bit deeper and found out that the worst that can happen when the "rule" is broken is that the managed object being accessed had already been finalized, or may be getting finalized in parallel on a separate thread. So if the SafeFileHandle's finalizer didn't do anything that would cause a subsequent call to Dispose to fail, then the above should be fine... right? Question: so there might after all be situations in which a method on another managed class may be called reliably from a finalizer? I've always believed this to be false, but this code suggests that it's possible and that there can be good enough reasons to do it. Bonus: Observe that the SafeFileHandle will not even know it's being called from a finalizer, since this is just a normal call to Dispose(). The base class, SafeHandle, actually has two private methods, InternalDispose and InternalFinalize, and in this case InternalDispose will be called. Isn't this a problem? Why not?...
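    For contrast, a minimal sketch of the usual Dispose(bool) guidance this question is probing - touch other managed objects only when disposing is true, because on the finalizer path they may already have been finalized. Class and field names are purely illustrative:

        using System;
        using System.IO;

        public class ResourceHolder : IDisposable
        {
            private MemoryStream _managedBuffer = new MemoryStream(); // another managed object
            private IntPtr _nativeHandle;                              // stand-in for an unmanaged resource

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            ~ResourceHolder()
            {
                // Finalizer path: only unmanaged state is safe to release here.
                Dispose(false);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing)
                {
                    // Reached via Dispose(): other managed objects are still guaranteed usable.
                    _managedBuffer.Dispose();
                }
                // Reached via either path: release unmanaged resources only.
                _nativeHandle = IntPtr.Zero; // placeholder for a real release call
            }
        }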

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit). Basically the application consists of 3 threads:
    - Main thread - GUI, control, etc.
    - Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread)
    - Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files.
    The reason I used a file buffer is that I wanted to be sure that I won't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generation is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process would speed it up considerably. The second problem is that the file buffer is really heavy for long records and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, the bus reader saves the data, the data interpreter reads it. What do you think about it? [edit] Currently, for buffering I'm using the BinaryReader and BinaryWriter classes backed by a file.
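    A minimal sketch of the cyclic-buffer idea under a simple lock; the capacity, byte-by-byte copying and full-buffer handling are deliberately naive, and a production version would need blocking/backpressure and bulk copies for throughput:

        using System;

        // One producer (bus reader) writes bytes, one consumer (data interpreter) reads them.
        public class ByteRingBuffer
        {
            private readonly byte[] _buffer;
            private int _head;   // next write position
            private int _tail;   // next read position
            private int _count;  // bytes currently stored
            private readonly object _gate = new object();

            public ByteRingBuffer(int capacityBytes)
            {
                _buffer = new byte[capacityBytes]; // e.g. several hundred MB allocated once, up front
            }

            public bool TryWrite(byte[] data, int offset, int length)
            {
                lock (_gate)
                {
                    if (_count + length > _buffer.Length) return false; // full - caller must handle
                    for (int i = 0; i < length; i++)
                    {
                        _buffer[_head] = data[offset + i];
                        _head = (_head + 1) % _buffer.Length;
                    }
                    _count += length;
                    return true;
                }
            }

            public int Read(byte[] destination, int offset, int maxLength)
            {
                lock (_gate)
                {
                    int toRead = Math.Min(maxLength, _count);
                    for (int i = 0; i < toRead; i++)
                    {
                        destination[offset + i] = _buffer[_tail];
                        _tail = (_tail + 1) % _buffer.Length;
                    }
                    _count -= toRead;
                    return toRead;
                }
            }
        }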

    Read the article

  • JavaScript equivalent of the ASP.NET PreInit event

    - by Pankaj Kumar
    Hi guys, searching for this has yielded no results. I have a middle page that has an iframe, and all the pages open in that iframe. I have to implement a chat system just like Google's, and to ensure that the chat windows do not close when the page posts back, I am opening all pages in the iframe. This is a social networking site and users can have themes, each with its own background image. Now, when I visit a friend's profile, the background image of the middle page should change. I have this code to find the parent of the iframe and change the background image:

        $(document).ready(function(){
            var theme = document.getElementById("ctl00_decideFooterH").value;
            var t = parent.document.getElementsByTagName("body");
            if (theme == 'basicTheme') {
                t[0].className = "basic";
            }
            else if (theme == 'Tranquility') {
                t[0].className = "Tranquility";
            }
            else if (theme == 'AbstractPink') {
                t[0].className = "abstractPink";
            }
        });

    This code is in the master page that all child pages opening in the iframe access. The trouble is that there is a lag between the theme being applied on the server side (which happens at PreInit) and the background image change, which is done by this document.ready handler that executes after the iframe has loaded. So essentially I need a JavaScript function that would execute in parallel with PreInit, or some other logic... so guys, please help, it's urgent.

    Read the article

  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a “Sync with Outlook” feature that I developed 10 years ago. Now, I’m going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called “Use MAPI Enhancements” where it uses pure MAPI to speed up how it looks for changes, and it allows notes to be synced with RTF instead of just plain text. I'm wondering whether supporting two parallel paths of execution is a good idea or not. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus "script-blocking" features block my app from connecting to Outlook. But I believe that on the downside, my 32-bit app would not be able to connect to 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example:
    Initiated by the user: a user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - it depends on file size). Of course I don't want to block the request while the import is running; instead I want to redirect the user to a progress page where he will have a chance to watch the status, see errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread inside IIS (in the controller action) and running the import there, but I am not sure that is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?
    Initiated by the system: I will have to periodically update the Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.NET, or should I also create a Windows service or something?
    What are the best practices when it comes to running site "jobs"? Thanks!
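    One common in-process pattern (leaving the Windows-service question aside) is to queue the work on the thread pool and track its progress in a shared dictionary keyed by a job id, which the progress page then polls. A rough sketch with invented names; note the explicit caveat that anything in-process is lost when the app pool recycles:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public class ImportStatus
        {
            public int PercentComplete;
            public volatile bool CancelRequested;
            public volatile string Error;
        }

        public static class ImportJobs
        {
            // Survives across requests because it is static; lost on app-pool recycle,
            // which is exactly why a Windows service is often preferred for critical work.
            public static readonly ConcurrentDictionary<Guid, ImportStatus> Jobs =
                new ConcurrentDictionary<Guid, ImportStatus>();

            public static Guid Start(string uploadedFilePath)
            {
                var id = Guid.NewGuid();
                var status = new ImportStatus();
                Jobs[id] = status;

                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        for (int step = 1; step <= 100 && !status.CancelRequested; step++)
                        {
                            // ... parse a chunk of uploadedFilePath and insert it into the db ...
                            status.PercentComplete = step;
                        }
                    }
                    catch (Exception ex)
                    {
                        status.Error = ex.Message;
                    }
                });

                return id; // the controller redirects to a progress page that looks up this id
            }
        }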

    Read the article

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that:
    1. The residuals don't systematically covary with the predictor
    2. The residuals are homoscedastic with respect to each predictor in the model
    On point (2), my textbook has this to say: "Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X." (p. 131)
    How can I modify loess lines? I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)
        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) + geom_smooth(se = FALSE)  # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }
                    flag1 = false;
                    flag2 = false;
                    val = 0;
                    Parallel.Invoke(A1, A2);
                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is why... I suppose the CPU reorders the flag1 = true; write with the if (flag2) read, but the variables flag1 and flag2 are marked as volatile fields...
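    This is the classic store-load case: volatile gives acquire/release semantics, but it does not stop a volatile write from being reordered past a subsequent volatile read of a different location, so both threads can still observe the other flag as false. A commonly cited fix, shown here only as a sketch of that idea, is a full fence between the write and the read:

        using System.Threading;

        static class FencedVersion
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void A1()
            {
                flag2 = true;
                Thread.MemoryBarrier(); // full fence: the write above cannot move past the read below
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                Thread.MemoryBarrier();
                if (flag2) val = 2;
            }
        }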

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Deterministic and non uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed? Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter in/remember a long text each time you want to decrypt the text. So I was wondering whether it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?
    EDIT: To expand on the "strength" of varying bit length: if I have the string "test", ASCII values 116, 101, 115, 116, that gives the bit values

        1110100 1100101 1110011 1110100

    Then, say my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we decode this back as ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.
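    The deterministic part is straightforward with any seeded PRNG; the non-uniform part just means drawing characters with unequal weights. The question asks about Python, but the idea is language-neutral - a small sketch in C#, where the seed derivation, alphabet and skewing are arbitrary choices for illustration:

        using System;
        using System.Text;

        static class SeededText
        {
            // Same password in -> same pseudo-text out; letter frequencies are deliberately skewed.
            public static string Generate(string password, int length)
            {
                int seed = 17;
                foreach (char c in password) seed = seed * 31 + c; // simple, fully deterministic seed

                var rng = new Random(seed);                         // reproducible for a given seed
                const string alphabet = "etaoinshrdlucmfwypvbgkjqxz"; // most frequent letters first

                var sb = new StringBuilder(length);
                for (int i = 0; i < length; i++)
                {
                    // Squaring a uniform draw biases the index toward 0, i.e. toward frequent letters.
                    double u = rng.NextDouble();
                    int index = (int)(u * u * alphabet.Length);
                    sb.Append(alphabet[index]);
                }
                return sb.ToString();
            }
        }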

    Read the article

  • Can Boost program_options separate comma-separated argument values?

    - by lrm
    If my command line is:

        > prog --mylist=a,b,c

    can Boost's program_options be set up to see three distinct argument values for the mylist argument? I have configured program_options as:

        namespace po = boost::program_options;
        po::options_description opts("blah");
        opts.add_options()
            ("mylist", po::value<std::vector<std::string>>()->multitoken(), "description");
        po::variables_map vm;
        po::store(po::parse_command_line(argc, argv, opts), vm);
        po::notify(vm);

    When I check the value of the mylist argument, I see one value: a,b,c. I'd like to see three distinct values, split on the comma. This works fine if I specify the command line as:

        > prog --mylist=a b c

    or

        > prog --mylist=a --mylist=b --mylist=c

    Is there a way to configure program_options so that it sees a,b,c as three values that should each be inserted into the vector, rather than one? I am using Boost 1.41, g++ 4.5.0 20100520, and have enabled the C++0x experimental extensions.

    Read the article

  • Internet Explorer 8 opens files in the browser instead of the client

    - by Rogier
    Our company works with a great Business Intelligence tool, CorVu 4.2, to analyse operational and strategic data. For several years we have been successfully working with SharePoint 2007 to collaborate and share information with colleagues. Most of my colleagues are working with Internet Explorer 7, but step by step Internet Explorer 8 is being rolled out in the company. We share a lot of CorVu files through SharePoint, but since we started using Internet Explorer 8 we have a problem that is new to us. If we click on a CorVu file in Internet Explorer 8 (not necessarily in SharePoint), a pop-up asks how to open the file. If we save the file, there is no problem, but if we open the file, it is shown in the browser and not in the CorVu client! See the screenshot below: link (I removed some unnecessary information). So far my colleagues accept this 'feature' in Internet Explorer 8. But if we open and close more CorVu files, multiple errors (more than 10) show up, starting with: (unable to place more hyperlinks). By pressing Enter the errors disappear, but it's not professional! I contacted the creators of CorVu, but they don't have a solution in their client. Maybe there is a solution in Internet Explorer 8? The extensions of a CorVu file can be .sqy, .tab or .qrp. Is it possible to force these files to open in the standard client instead of the browser?

    Read the article

  • How can I tell whether a webpart that has been deployed to a site is a native webpart that ships with SharePoint?

    - by program247365
    I have a SharePoint 2007 MOSS instance, and I'm on a fact-finding mission. There have been multiple developers developing multiple webparts and deploying them (using the VS2005/2008 SharePoint extensions). I thought maybe I could look at the items in the "Web Part Gallery" list in my site and go by "Modified By", but it looks like a developer's name somehow appears on some of the out-of-the-box webparts, while ones I know were custom developed say "System Account" - so looking at that field is a no-go. I then thought maybe I could look at the "Group" each webpart was assigned to, but it looks like they were arbitrarily assigned to many different groups inconsistently - so using that piece of information is a no-go as well. Here is the code I have for just looping through and getting the names of all the webparts. Is there any property I can access on the list items of webparts that would tell me whether it's a custom-developed webpart? Any way to distinguish the custom webparts from the out-of-the-box ones? Is there another way to do this?

        #region Misc Site Collection Methods
        public static List<string> GetAllWebParts(string connectedSPInstanceUrl)
        {
            List<string> lstWebParts = new List<string>();
            try
            {
                using (SPSite site = new SPSite(connectedSPInstanceUrl))
                {
                    using (SPWeb web = site.OpenWeb())
                    {
                        SPList list = web.Lists["Web Part Gallery"];
                        foreach (SPListItem item in list.Items)
                        {
                            lstWebParts.Add(item.Name);
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                lstWebParts.Add("Error");
                lstWebParts.Add("Message: " + ex.Message);
                lstWebParts.Add("Inner Exception: " + ex.InnerException.ToString());
                lstWebParts.Add("Stack Trace: " + ex.StackTrace);
            }
            return lstWebParts;
        }
        #endregion
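    One approach worth sketching, hedged heavily: each gallery item is a .webpart/.dwp definition file that names the web part's type and assembly, so reading that file and checking whether the assembly looks like a Microsoft one gives a heuristic (not authoritative) answer. This assumes SPListItem.File is populated for gallery items and that the definition XML contains the assembly name:

        using System;
        using System.Text;
        using Microsoft.SharePoint;

        public static class WebPartOrigin
        {
            // Heuristic sketch: inspect the gallery item's .webpart/.dwp definition and
            // guess "built-in" when the referenced assembly looks like a Microsoft one.
            public static bool LooksOutOfTheBox(SPListItem galleryItem)
            {
                byte[] bytes = galleryItem.File.OpenBinary();
                string xml = Encoding.UTF8.GetString(bytes);

                // v3 .webpart files carry <type name="Namespace.Class, Assembly, ..."/>,
                // v2 .dwp files carry <Assembly>...</Assembly>; a substring check covers both roughly.
                return xml.Contains("Microsoft.SharePoint") || xml.Contains("Microsoft.Office");
            }
        }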

    Read the article

  • How to retrieve a numbered sequence range from a List of filenames?

    - by glenneroo
    I would like to automatically parse the entire numbered sequence range of a List<FileData> of filenames (sans extensions) by checking which part of the filename changes. Here is an example (file extension already removed):

        First filename: IMG_0000
        Last filename:  IMG_1000
        Numbered range I need: 0000 to 1000

    Except I need to deal with every possible type of file naming convention, such as:

        0000 ... 9999
        20080312_0000 ... 20080312_9999
        IMG_0000 - Copy ... IMG_9999 - Copy
        8er_green3_00001 ... 8er_green3_09999
        etc.

    - I need the entire 0-padded range, e.g. 0001, not just 1
    - The sequence number is 0-padded, e.g. 0001
    - The sequence number can be located anywhere, e.g. IMG_0000 - Copy
    - The range can start and end with anything, i.e. it doesn't have to start with 1 and end with 9999

    Whenever I get something working for 8 random test cases, the 9th test breaks everything and I end up re-starting from scratch. I've currently been comparing only the first and last filenames (as opposed to iterating through all filenames):

        void FindRange(List<FileData> files, out string startRange, out string endRange)
        {
            string firstFile = files.First().ShortName;
            string lastFile = files.Last().ShortName;
            ...
        }

    Does anyone have any clever ideas?
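    A hedged sketch of one way to do that first/last comparison; it assumes the two names have equal length and differ only within a single contiguous digit run (which holds for all the examples above), and expands outward so the whole 0-padded run is captured:

        using System;

        static class RangeFinder
        {
            // Returns e.g. ("0000", "1000") for "IMG_0000" / "IMG_1000".
            public static void FindRange(string firstFile, string lastFile,
                                         out string startRange, out string endRange)
            {
                // Find the first and last positions where the two names differ.
                int diffStart = 0;
                while (diffStart < firstFile.Length && firstFile[diffStart] == lastFile[diffStart])
                    diffStart++;

                int diffEnd = firstFile.Length - 1;
                while (diffEnd > diffStart && firstFile[diffEnd] == lastFile[diffEnd])
                    diffEnd--;

                // Expand outward so the whole 0-padded digit run is captured (e.g. "0000", not "0").
                while (diffStart > 0 && char.IsDigit(firstFile[diffStart - 1]))
                    diffStart--;
                while (diffEnd < firstFile.Length - 1 && char.IsDigit(firstFile[diffEnd + 1]))
                    diffEnd++;

                startRange = firstFile.Substring(diffStart, diffEnd - diffStart + 1);
                endRange = lastFile.Substring(diffStart, diffEnd - diffStart + 1);
            }
        }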

    Read the article
