Search Results

Search found 6772 results on 271 pages for 'exit strategy'.

Page 21 of 271

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes the hash out to a new "sidecar" file on "queue2" for later verification. I would like to open multiple files at the same time, but throttled by some logic so that I don't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable throughput. Photo-browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA
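
    A common way to avoid choking the OS is to gate the work behind a counting semaphore sized to what the disk sustains: submissions are unbounded, but only N files are ever open at once. A minimal sketch of that throttling idea, written in Java for concreteness (in GCD a dispatch_semaphore_t plays the same role; SHA-1 stands in for the DSA-based sidecar hash):

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.concurrent.*;

        // Sketch: accept many submissions, but hash only N files at a time.
        public class BoundedHasher {
            private static final Semaphore openSlots = new Semaphore(8); // tune to the drive
            private static final ExecutorService pool = Executors.newCachedThreadPool();

            static Future<?> submit(Path file) {
                return pool.submit(() -> {
                    openSlots.acquire();              // block until a slot frees up
                    try {
                        byte[] hash = MessageDigest.getInstance("SHA-1")
                                                   .digest(Files.readAllBytes(file));
                        // Write the "sidecar" next to the original for later checks.
                        Files.write(Paths.get(file + ".sha1"), toHex(hash).getBytes());
                    } finally {
                        openSlots.release();          // free the slot even on failure
                    }
                    return null;
                });
            }

            private static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }
        }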

    Read the article

  • PowerShell profile "on exit" event?

    - by poke
    I'm looking for a way to run some cleanup tasks automatically when the PowerShell session quits. For example, in my profile I start a process that needs to run in the background for quite a few tasks, and I would like that process to be closed automatically when I close the console. Is there a function that PowerShell calls automatically when the session closes, the way it calls prompt before displaying the prompt?
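
    For what it's worth, PowerShell does expose a hook for this: Register-EngineEvent with the PowerShell.Exiting source identifier runs a script block on a normal engine exit (with the usual caveat that a killed console window may bypass it). The generic shape of the fix, a cleanup callback registered with the host runtime, looks like this as a minimal Java sketch (the "my-helper" command is a placeholder):

        // Register a hook the runtime runs at shutdown, and use it to stop
        // a background helper process started at startup.
        public class CleanupOnExit {
            public static void main(String[] args) throws Exception {
                Process helper = new ProcessBuilder("my-helper").start();

                // Invoked on normal JVM shutdown, much like an Exiting handler.
                Runtime.getRuntime().addShutdownHook(new Thread(helper::destroy));

                // ... the rest of the session's work ...
            }
        }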

    Read the article

  • HFT strategy coding on hardware

    - by bsobaid
    Hardware acceleration and embedded programming have mostly been used so far to parse data feeds and/or to route orders to the exchange. Have there been attempts to implement simpler HFT strategies, such as equity market-making, in hardware? Have they been successful? Which companies are doing this, and what kind of programming model is used?

    Read the article

  • Strategy for developing namespaced and non-namespaced versions of same PHP code

    - by porneL
    I'm maintaining a library written for PHP 5.2 and I'd like to create a PHP 5.3-namespaced version of it. However, I'd also like to keep the non-namespaced version up to date until PHP 5.3 becomes so old that even Debian stable ships it ;) I've got rather clean code: about 80 classes following a Project_Directory_Filename naming scheme (which I'd change to \Project\Directory\Filename, of course) and only a few functions and constants (also prefixed with the project name). The question is: what's the best way to develop the namespaced and non-namespaced versions in parallel? Should I just create a fork in the repository and keep merging changes between branches? Are there cases where backslash-sprinkled code becomes hard to merge? Should I write a script that converts the 5.2 version to 5.3, or vice versa? Should I use the PHP tokenizer? sed? The C preprocessor? Is there a better way to use namespaces where available and keep backwards compatibility with older PHP?

    Read the article

  • exec() in BeanShell macro causes jEdit to hang when it returns non-zero exit code

    - by rossmeissl
    I have a jEdit BeanShell macro that runs my Markdown files through Maruku when I save them:

        if (buffer.getMode().toString().equals("markdown")) {
            cmd = "C:\\Ruby\\bin\\maruku.bat -o "
                + buffer.getDirectory()
                + buffer.getName().replaceAll("markdown$", "html")
                + " " + buffer.getPath();
            exec(cmd);
        }

    This works great when the Markdown file is valid. But if I've made a mistake, jEdit just waits around forever for the exec() call to "succeed," which it never will. When this happens, I have to kill jEdit's javaw.exe process and run Maruku manually from the command line to discover the error, e.g.:

        E:\bp\plan\supply_chain>maruku business_plan.markdown
         ___________________________________________________________________________
        | Maruku tells you:
        +---------------------------------------------------------------------------
        | Could not find ref_id = "17" for md_link(["17"],"17")
        | Available refs are []
        +---------------------------------------------------------------------------
        !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/errors_management.rb:49:in `maruku_error'
        !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:716:in `to_html_link'
        !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:970:in `send'
        !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:970:in `array_to_html'
        !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:961:in `each'
        \___________________________________________________________________________
        Not creating a link for ref_id = "17".

    Then I restart jEdit, fix the error, and re-save the file, at which point the macro succeeds. How can I make my macro more resilient, so it either dies helpfully (displaying Maruku's error output) or, at the very least, dies silently so I don't have to kill jEdit?
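
    Since BeanShell macros are Java syntax and can call the Java process API directly, one hedged way to get both behaviors (die helpfully, never hang) is to bypass jEdit's exec() helper and run Maruku with an explicit waitFor(), draining its output first so the pipe can't fill up. A sketch; htmlPath and sourcePath stand for the strings the macro above concatenates, and Macros.error() with the predefined view variable is the standard jEdit macro facility for showing a dialog:

        import java.io.*;

        // Run maruku, wait for it to exit, and surface its output on failure.
        ProcessBuilder pb = new ProcessBuilder(
                "C:\\Ruby\\bin\\maruku.bat", "-o", htmlPath, sourcePath);
        pb.redirectErrorStream(true);                 // fold stderr into stdout

        Process p = pb.start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder out = new StringBuilder();
        String line;
        while ((line = r.readLine()) != null)         // drain before waiting,
            out.append(line).append('\n');            // so the child can't block

        int code = p.waitFor();                       // returns instead of hanging
        if (code != 0)
            Macros.error(view, "maruku failed (" + code + "):\n" + out);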

    Read the article

  • Partial Git deployment strategy?

    - by MatW
    I need to set up a Kohana dev environment that allows me to make full use of shared module/system classes across separate applications, each typically belonging to a different client. I use Git for source control, but am struggling to come up with a clean deployment method that will allow me to pull only those parts of the dev environment specific to a client/app down into that client's production environment (assuming that the client's production environment will have Git installed). Dev environment:

        kohana/
            applications/
                clientapp1/
                clientapp2/
            modules/
            public_html/
                clientapp1/
                clientapp2/
            system/
                3.0.1/
                3.0.5/

    Client 1's production environment:

        /
            applications/
                clientapp1/
            modules/
            public_html/
                clientapp1/
            system/
                3.0.5/

    Naturally, I want to have total control over each client "sub-repo" as if it were an independent repo (in terms of gitignore, etc.). I have seen topics that cover Git's sparse checkout feature, but it seems like it may cause a few problems down the line from a maintenance point of view, and I don't like the idea of the entire repo's metadata existing in the client's production environment repo. As you can probably tell, I'm not exactly a Git power user, so any suggestions / wisdom are very welcome!

    Read the article

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for the transfer of data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself (i.e., as a replacement for a traditional database/ORM layer)? Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SQL CE or SQLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!
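
    For what it's worth, the hybrid described at the end is a common pattern: a few plain, indexable columns for querying, plus one BLOB column holding the serialized message. A hedged sketch of its shape, shown with JDBC and SQLite for brevity rather than the Compact Framework; the Person message and its toByteArray()/parseFrom() are the usual protobuf-generated methods, stubbed out here:

        import java.sql.*;

        // Searchable columns for queries; the full message lives in a BLOB.
        public class BlobStore {
            public static void main(String[] args) throws Exception {
                try (Connection db = DriverManager.getConnection("jdbc:sqlite:app.db")) {
                    db.createStatement().execute(
                        "CREATE TABLE IF NOT EXISTS person (" +
                        "  id INTEGER PRIMARY KEY," +
                        "  last_name TEXT," +          // searchable column
                        "  city TEXT," +               // searchable column
                        "  payload BLOB)");            // serialized message

                    try (PreparedStatement ins = db.prepareStatement(
                            "INSERT INTO person (last_name, city, payload) VALUES (?, ?, ?)")) {
                        ins.setString(1, "Smith");
                        ins.setString(2, "Boston");
                        ins.setBytes(3, new byte[0]);  // person.toByteArray() in real code
                        ins.executeUpdate();
                    }

                    // Query on the searchable columns, deserialize the payload.
                    try (ResultSet rs = db.createStatement().executeQuery(
                            "SELECT payload FROM person WHERE last_name = 'Smith'")) {
                        while (rs.next()) {
                            byte[] bytes = rs.getBytes(1);
                            // Person p = Person.parseFrom(bytes);
                        }
                    }
                }
            }
        }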

    Read the article

  • Strategy in storing ad-hoc numbers/constants?

    - by Jiho Han
    I have a need to store a number of ad-hoc figures and constants for calculation. These numbers change periodically, and they are different types of values: one might be a balance, a money amount, another might be an interest rate, and yet another might be a ratio of some kind. These numbers are then used in a calculation that involves other, more structured figures. I'm not certain what the best way to store these in a relational DB is (that's the app's chosen form of storage). One way, which I've done before, is to create a very generic table that stores the values as text. I might store the data type along with it, but the consumer knows what type it is, so in some situations I didn't even need to store the data type. This kind of works fine, but I am not very fond of the solution. Should I break the numbers down into specific categories and create tables that way, for example a Rates table, a Balances table, etc.?

    Read the article

  • Elegant check for null and exit in C#

    - by aip.cd.aish
    What is an elegant way of writing this?

        if (lastSelection != null)
        {
            lastSelection.changeColor();
        }
        else
        {
            MessageBox.Show("No Selection Made");
            return;
        }

    changeColor() is a void function, and the function that is running the above code is a void function as well.
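
    The usual tidy-up is the guard clause: handle the nothing-selected case first and fall through to the normal path. A sketch in Java syntax (the C# version has the same shape, with MessageBox.Show in place of the console call):

        // Guard clause: reject the bad case early, keep the happy path unindented.
        interface Selection { void changeColor(); }

        class SelectionDemo {
            static void applyColor(Selection lastSelection) {
                if (lastSelection == null) {
                    System.out.println("No Selection Made"); // MessageBox.Show analog
                    return;
                }
                lastSelection.changeColor();
            }
        }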

    Read the article

  • Browser loading strategy, <head>...<body>

    - by Bin Chen
    I am inspecting some interesting browser behavior, and I don't know whether it's in the standard or not. If I put everything inside <head></head>, the browser will only begin to render the page after all the resources in the head have been retrieved. So I am thinking that putting as little as possible in the head is one of the important website optimization techniques; is that right? My question is: if I put a script/css in the body or other parts of the HTML, how can I know that the script has been loaded successfully, so that I won't be calling an undefined function?

    Read the article

  • C#: RedirectStandardOutput prevents exit exception?

    - by user1743977
    All I want is to redirect the standard output of a process to a file. Sounds easy, but everything I tried doesn't work: (1) putting a DOS-style redirection in the list of arguments (e.g. "param1 param2 > output.txt") doesn't work; (2) using RedirectStandardOutput = true works, BUT apparently the process does not throw an exception when it exits, so the handler defined via process.Exited += ... doesn't catch anything. To be clear, once I remove the RedirectStandardOutput = true statement, it DOES catch an exception. What am I doing wrong? Thanks, Ofer
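
    Two .NET details are worth knowing here: Exited only fires if EnableRaisingEvents is set to true on the Process, and once RedirectStandardOutput is true the parent has to actually read StandardOutput (or call BeginOutputReadLine), otherwise the child can block on a full pipe. For comparison, a hedged sketch of the same plumbing in Java, where the runtime can hand stdout straight to a file ("mytool" and its arguments are placeholders):

        import java.io.File;

        // Send the child's stdout straight to a file; exit is still observable.
        public class RedirectDemo {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder("mytool", "param1", "param2");
                pb.redirectOutput(new File("output.txt"));  // no in-process pumping needed
                Process p = pb.start();
                int code = p.waitFor();                     // observe the exit
                System.out.println("exited with " + code);
            }
        }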

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure in pseudo-SQL, I hope to explain it a bit better.

        TABLE StructureName {
            Id    GUID PK,
            Name  varchar(50) NOT NULL
        }

        TABLE Structure {
            Id        GUID PK,
            ParentId  GUID (FK to Structure),
            NameId    GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id               GUID PK,
            RootStructureId  GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (ordering of children is not a concern for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree "layout" of the Structure table. Previous versions should always be available. There seem to be a few possible ways to tackle this issue, like copying the entire structure, but most approaches cause one to lose referential integrity. For example, if one followed that approach, one would have to duplicate the Something record, given that the root structure would be a new record with a new ID. Other possible avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Mercurial merge strategy per file type

    - by dls
    All: I want to use kdiff3 to merge all files with a certain suffix (say *.c, *.h), and I want to do two things (turn off premerge and use internal:other) for all files with another suffix (say *.mdl). The purpose of this is to allow a kind of "clobber merge" for a specific file type (i.e. un-mergeable files like configurations, auto-generated C, models, etc.). In my .hgrc I've tried:

        [merge-tools]
        kdiff3 =
        clobbermerge = internal:other
        clobbermerge.premerge = False

        [merge-patterns]
        **.c = kdiff3
        **.h = kdiff3
        **.mdl = clobbermerge

    but it still triggers kdiff3 for all files. Thoughts? An extension of this would be to perform a "clobber merge" on a directory, but once the syntax is clear for a file suffix, the directory case should be easy.

    Read the article

  • Evaluation strategy examples

    - by Boontz
    Assuming the language supports these evaluation strategies, what would the result be under call by reference, call by name, and call by value?

        void swap(int a, int b) {
            int temp;
            temp = a;
            a = b;
            b = temp;
        }

        int i = 3;
        int A[5];
        A[3] = 4;
        swap(i, A[3]);
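
    A quick check of the call-by-value case, sketched in Java (which passes everything by value, so the swap touches only the local copies):

        public class SwapDemo {
            static void swap(int a, int b) {
                int temp = a; a = b; b = temp;   // swaps only the local copies
            }

            public static void main(String[] args) {
                int i = 3;
                int[] A = new int[5];
                A[3] = 4;
                swap(i, A[3]);
                System.out.println(i + " " + A[3]); // prints "3 4": unchanged
                // Under call by reference, i and A[3] would exchange values
                // (i == 4, A[3] == 3). Call by name substitutes the argument
                // expressions for a and b; with the constant index A[3] it
                // behaves like call by reference here (the strategies diverge
                // for calls like swap(i, A[i])).
            }
        }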

    Read the article

  • Strategy for developing a multi-function ASP.NET web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application that contains a booking module for reserving timeslots, and a time management module that will enable employees to clock in/clock out. If I am writing an update to the time management module, I don't want to disrupt the booking engine's availability by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality, like common users, roles and security. Here's a suggestion I've gotten, which sounds a bit cruddy but may be functional: write a "container" web application that consists of basically a frame and authentication/security features, with links that load the two independently built and released web applications into the frame. I can see that if I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be untouched. Any better alternatives?

    Read the article

  • What's the best Drupal deployment strategy?

    - by Horace Ho
    I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client. I am going to deploy the project on a Linux VPS in two weeks. Is there a better way than redoing everything on the server from scratch:

        install Drupal
        download modules (CCK, Views, Date, Calendar)
        create the Contents
        ...

    Thanks

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GB each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

        - Two big SQL Server 2012 machines serving data to the 32 servers, which run IIS and provide REST services.
        - Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
        - A combination of SQL Server and Redis.
        - SQL Server 2012 together with Hadoop.
        - Hadoop without SQL Server.

    What is the best way, for a read-only database, to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table against which all queries can be performed, with all other information in the other tables reachable by a simple key lookup.

    Read the article

  • Force exit from readline() function

    - by Sasun Hambardzumyan
    I am writing a program in C++ which runs GNU readline in a separate thread. When the main thread exits, I need to finish the thread in which the readline() function is called. The readline() function returns only when input arrives on standard input (Enter pressed). Is there any way to send input to the application, or to explicitly return from the readline function? Thanks in advance.

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select *
        from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns, or 10 indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single 10-column index would only work effectively if all 10 columns are in the condition. I'm open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", have all of those records loaded into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?
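
    On feasibility: a few thousand "Smith" rows held in memory is small, and one bulk fetch plus an in-memory scan usually beats a round-trip per candidate by a wide margin. The shape of that in-memory pass, sketched in Java to keep these notes to one language (in Python the same shape is a list of dicts, or a dict of lists keyed by last name):

        import java.util.*;

        // One bulk fetch per last name, then an in-memory scan over candidates.
        public class Matcher {
            public static void main(String[] args) {
                // Stand-in for "SELECT * FROM customers WHERE last_name = 'Smith'".
                List<Map<String, String>> smiths = List.of(
                    Map.of("first", "John", "city", "Boston"),
                    Map.of("first", "Jane", "city", "Denver"),
                    Map.of("first", "John", "city", "Austin"));

                // Match one transaction against the cached candidates using
                // additional data points, without touching the database again.
                Map<String, String> txn = Map.of("first", "John", "city", "Austin");
                for (Map<String, String> c : smiths) {
                    if (c.get("first").equals(txn.get("first"))
                            && c.get("city").equals(txn.get("city"))) {
                        System.out.println("matched: " + c);
                    }
                }
            }
        }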

    Read the article

  • Exit and rollback everything in script on error

    - by Jan W.
    Hey guys! I'm in a bit of a pickle here. I have a T-SQL script that does a lot of database structure adjustments, but it's not really safe to just let it go through when something fails. To make things clear: I'm using MS SQL 2005, and it's NOT a stored procedure, just a script file (.sql). What I have is something in the following order:

        BEGIN TRANSACTION

        ALTER Stuff
        GO

        CREATE New Stuff
        GO

        DROP Old Stuff
        GO

        IF @@ERROR != 0
        BEGIN
            PRINT 'Errors Found ... Rolling back'
            ROLLBACK TRANSACTION
            RETURN
        END
        ELSE
            PRINT 'No Errors ... Committing changes'
            COMMIT TRANSACTION

    (just to illustrate what I'm working with; I can't go into specifics now). The problem: when I introduce an error (to test whether things get rolled back), I get a message that the ROLLBACK TRANSACTION could not find a corresponding BEGIN TRANSACTION. This leads me to believe that something went REALLY wrong and the transaction was already killed. What I also noticed is that the script didn't fully quit on the error, and thus DID try to execute every statement after the error occurred. (I noticed this when new tables showed up when I wasn't expecting them, because it should have rolled back.) Any help in this department is welcome; if more specifics are needed, ask! Greetz
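
    Two T-SQL behaviors conspire here: @@ERROR is reset by every statement that succeeds, so by the final IF it usually reads 0 again, and each GO starts a new batch that runs even if the previous batch failed. SET XACT_ABORT ON at the top of the script makes most run-time errors doom the whole transaction, and sqlcmd's -b flag aborts the script on error. If driving the DDL from code is an option, the same all-or-nothing shape falls out of one transaction plus an exception; a hedged JDBC sketch (connection string and statements are placeholders):

        import java.sql.*;

        // One transaction, explicit rollback on the first failure.
        public class MigrationRunner {
            public static void main(String[] args) throws SQLException {
                String[] steps = {
                    "ALTER TABLE stuff ADD new_col INT",   // placeholders for the real DDL
                    "CREATE TABLE new_stuff (id INT)",
                    "DROP TABLE old_stuff"
                };
                try (Connection db = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=mydb")) { // placeholder
                    db.setAutoCommit(false);
                    try (Statement st = db.createStatement()) {
                        for (String sql : steps) st.execute(sql); // first failure throws
                        db.commit();
                        System.out.println("No Errors ... Committing changes");
                    } catch (SQLException e) {
                        db.rollback();                            // undo everything
                        System.out.println("Errors Found ... Rolling back: " + e.getMessage());
                    }
                }
            }
        }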

    Read the article

  • Strategy to rename the require_once function?

    - by openfrog
    Rather than just writing a new function called import(), I'd like to know if there's a better solution. Otherwise, require_once would be included in the scope of import() only, which is bad for any "global" variable there. My import() function would work differently from require_once, but serves the same purpose (enhanced usability).

    Read the article

  • Strategy for desugaring Haskell

    - by luqui
    I'm developing a virtual machine for purely functional programs, and I would like to be able to test and use the wide variety of Haskell modules already available. The VM takes as input, essentially, terms in the untyped lambda calculus. I'm wondering what would be a good way to extract such a representation from modern Haskell modules (e.g. with MPTCs, pattern guards, etc.). I did a little research and there doesn't seem to be a tool that does this already (I would be delighted to be mistaken), and that's okay; I'm looking for an approach. GHC Core seems too operationally focused, especially since one of the things the VM does is change the evaluation order significantly. Are there any accessible intermediate representations that correspond more closely to the lambda calculus?

    Read the article

  • How to change Hibernate's automatic persistence strategy

    - by Kristofer Borgstrom
    I just noticed that my Hibernate entities are automatically persisted to the database (or at least to the cache) before I call any save() or update() method. To me this is pretty strange default behavior, but okay: as long as I can disable it, it's fine. The problem I have is that I want to update my entity's state (from 1 to 2) only if the entity in the database still has the state it had when I retrieved it (this is to eliminate concurrency issues when another server is updating the same object). For this reason I have created a custom NamedQuery that will only update the entity if its state is 1. Here is some pseudo-code:

        // Get the entity
        Entity item = dao.getEntity();
        item.getState(); // == 1

        // Update the entity
        item.setState(2);

        // Here is the problem: this effectively changes the state of my
        // entity, breaking my query that verifies that state is still == 1.
        dao.customUpdate(item); // returns 0 rows changed, since state != 1

    So, how do I make sure the setters don't change the state in the cache/DB? Thanks, Kristofer
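
    What's described is Hibernate's automatic dirty checking: changes to a managed entity are flushed to the database before queries execute, whether or not save()/update() was called. Hibernate's own answer to this class of race is optimistic locking with a @Version column, but a hedged sketch of the minimal workaround is to detach the entity before mutating it (Session.evict() is real Hibernate API; Entity and Dao are stand-ins from the pseudo-code above):

        import org.hibernate.Session;

        // Detach the entity first, so dirty checking can't flush state = 2
        // before the conditional UPDATE runs.
        public class StateTransition {
            boolean tryAdvance(Session session, Dao dao) {
                Entity item = dao.getEntity();     // state == 1 when loaded
                session.evict(item);               // now detached: no auto-flush
                item.setState(2);
                int rows = dao.customUpdate(item); // UPDATE ... SET state = 2 WHERE state = 1
                return rows == 1;                  // 0 rows: another server won the race
            }
        }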

    Read the article
