Search Results

Search found 5335 results on 214 pages for 'agile processes'.


  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines, it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual. I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
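
    The "pass only the bytes" idea mentioned above can be sketched roughly as follows. This is a conceptual sketch, not a drop-in fix: ParseLine is a hypothetical stand-in for PROCESS_Binary_Return_lineData, DATALENGTH and Constants.dataFile are the poster's own members, and it assumes .NET 4 for ConcurrentDictionary and CountdownEvent. All reads happen on the one thread that owns the BinaryReader, so the reader is never shared across threads.

        // Read each line's bytes on the owning thread, then hand a private
        // byte[] copy to a pool thread; the BinaryReader itself is never shared.
        var results = new ConcurrentDictionary<DateTime, Data>();
        using (var countdown = new CountdownEvent(1))
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            long pos = b.BaseStream.Length - DATALENGTH;
            while (pos >= 0)
            {
                b.BaseStream.Position = pos;
                byte[] line = b.ReadBytes(DATALENGTH); // private copy for one worker
                countdown.AddCount();
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Data lineData = ParseLine(line);   // pure function of the bytes
                    results.TryAdd(lineData.DateTime, lineData);
                    countdown.Signal();
                }, null);
                pos -= DATALENGTH;
            }
            countdown.Signal(); // release the initial count
            countdown.Wait();   // all workers done
        }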

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find much detail about how such a migration would work.

    - Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually?
    - If NHibernate does insert rows automatically, does it properly take account of existing key values?
    - If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update

    NHibernate's increment identifier generator works entirely in-memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but the actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
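
    For reference, the core of the hi-lo scheme is small enough to sketch. This is a hedged illustration of the algorithm only; NHibernate's hilo generator does this internally, and the table and column names in the comment are placeholders:

        // Conceptual hi-lo allocator: one database round-trip reserves a whole
        // block of keys, after which allocation is pure in-memory arithmetic.
        class HiLoAllocator
        {
            private readonly int maxLo; // block size, e.g. 100
            private long hi = -1;       // current block number, fetched from the database
            private int lo;             // position within the current block

            public HiLoAllocator(int maxLo) { this.maxLo = maxLo; }

            public long NextId(Func<long> fetchAndIncrementHi)
            {
                if (hi < 0 || lo >= maxLo)
                {
                    // e.g. "UPDATE hibernate_unique_key SET next_hi = next_hi + 1"
                    // plus a SELECT, executed in one short transaction
                    hi = fetchAndIncrementHi();
                    lo = 0;
                }
                return hi * maxLo + lo++;
            }
        }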

    Read the article

  • ActiveRecord exceptions not rescued

    - by zoopzoop
    I have the following code block:

        unless User.exists?(...)
          begin
            user = User.new(...)
            # Set more attributes of user
            user.save!
          rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotUnique => e
            # Check if that user was created in the meantime
            user = User.exists?(...)
            raise e if user.nil?
          end
        end

    The reason is, as you can probably guess, that multiple processes might call this method at the same time to create the user (if it doesn't already exist). So while the first one enters the block and starts initializing a new user, setting the attributes and finally calling save!, the user might already have been created. In that case I want to check again if the user exists, and only raise the exception if it still doesn't (= if no other process has created it in the meantime). The problem is that ActiveRecord::RecordInvalid exceptions are regularly raised from the save! and not rescued by the rescue block. Any ideas?

    EDIT

    Alright, this is weird. I must be missing something. I refactored the code according to Simone's tip to look like this:

        unless User.find_by_email(...).present?
          # Here we know the user does not exist yet
          user = User.new(...)
          # Set more attributes of user
          unless user.save
            # User could not be saved for some reason, maybe created by another request?
            raise StandardError, "Could not create user for order #{self.id}." unless User.exists?(:email => ...)
          end
        end

    Now I got the following exception:

        ActiveRecord::RecordNotUnique: Mysql::DupEntry: Duplicate entry '[email protected]' for key 'index_users_on_email': INSERT INTO `users` ...

    thrown on the line that says "unless user.save". How can that be? Rails thinks the user can be created because the email is unique, but then the MySQL unique index prevents the insert? How likely is that? And how can it be avoided?

    Read the article

  • C# WinForms MultiThreading in Loop

    - by Goober
    Scenario

    I have a background worker in my application that runs off and does a bunch of processing. I specifically used this implementation so as to keep my user interface fluid and prevent it from freezing up. I want to keep the background worker, but inside that thread spawn off ONLY 3 MORE threads, making them share the processing (currently the worker thread just loops through and processes each asset one-by-one). However, I would like to speed this up using only a limited number of threads.

    Question

    Given the code below, how can I get the loop to choose a thread that is free, and then essentially wait if there isn't one free before it continues?

        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
        {
            Haganise h = new Haganise(kvp.Value, busDate, inputMktSet, outputMktSet,
                                      prodType, noOfAssets, bulkSaving);
            h.DoWork();
        }

    Thoughts

    I'm guessing that I would have to start off by creating 3 new threads, but my concern is that if I'm instantiating a new Haganise object each time, how can I pass the correct "h" object to the correct thread?

        Thread firstThread  = new Thread(new ThreadStart(h.DoWork));
        Thread secondThread = new Thread(new ThreadStart(h.DoWork));
        Thread thirdThread  = new Thread(new ThreadStart(h.DoWork));

    Help greatly appreciated.
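
    One way to cap concurrency at three without hand-picking threads is to gate the loop with a semaphore, as sketched below against the poster's own types. Each Haganise instance is captured by its own closure variable, so the right "h" always reaches the right worker. A producer/consumer queue feeding exactly three long-lived threads would be the tighter design if thread creation cost matters.

        // Cap the number of simultaneously running workers at three; the loop
        // blocks on the semaphore whenever three workers are already busy.
        Semaphore gate = new Semaphore(3, 3);
        List<Thread> workers = new List<Thread>();
        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
        {
            Haganise h = new Haganise(kvp.Value, busDate, inputMktSet, outputMktSet,
                                      prodType, noOfAssets, bulkSaving);
            gate.WaitOne(); // wait for a free slot
            Thread t = new Thread(delegate()
            {
                try { h.DoWork(); }
                finally { gate.Release(); } // free the slot even if DoWork throws
            });
            workers.Add(t);
            t.Start();
        }
        foreach (Thread t in workers) t.Join(); // wait for the stragglers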

    Read the article

  • Perl: get substring which matches regex error

    - by Michael Mao
    Hi all: I am very new to Perl, so please bear with my simple question. Here is the sample output:

        Most successful agents in the Emarket climate are (in order of success):
        1. agent10896761 ($-8008)
        2. flightsandroomsonly ($-10102)
        3. agent10479475hv ($-10663)
        Most successful agents in the Emarket climate are (in order of success):
        1. agent10896761 ($-7142)
        2. agent10479475hv ($-8982)
        3. flightsandroomsonly ($-9124)

    I am interested only in the agent names and their corresponding balances, so for later processing I am hoping to get the following output:

        agent10896761 -8008
        flightsandroomsonly -10102
        agent10479475hv -10663
        agent10896761 -7142
        agent10479475hv -8982
        flightsandroomsonly -9124

    This is the code I've got so far:

        #!/usr/bin/perl -w
        open(MYINPUTFILE, $ARGV[0]);
        while (<MYINPUTFILE>)
        {
            my ($line) = $_;
            chomp($line);
            # regex match test
            if ($line =~ m/agent10479475/)
            {
                if ($line =~ m/($-[0-9]+)/)
                {
                    print "$1\n";
                }
            }
            if ($line =~ m/flightsandroomsonly/)
            {
                print "$line\n";
            }
        }

    There is nothing wrong with the second regex match, since it prints out the whole line. However, for the first regex match I've got some other output, such as:

        $ ./compareResults.pl 3.txt
        2. flightsandroomsonly ($-10102)
        0479475
        0479475
        3. flightsandroomsonly ($-9124)
        1. flightsandroomsonly ($-8053)
        0479475
        1. flightsandroomsonly ($-6126)
        0479475

    If I "escape" the parentheses like this:

        if ($line =~ m/\($-[0-9]+\)/)
        {
            print "$1\n";
        }

    then there is never a match for the first regex... So I'm stuck with the problem of making that particular regex work. Any hints for this? Many thanks in advance.

    Read the article

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have a component-based architecture framework designed, and I use NUnit for isolated testing; okay so far. Now I want to enable integration tests, where the tests use real implementations of the existing components. Each element of a component has a life cycle (init, start and stop), and I created an NUnit component. In the start section, the NUnit console runner is executed. Okay: now if I have a test fixture class in my DLLs in the execution path, the runner executes them. Fine! But, and this is crucial: each implementation to be tested already exists in the process, and I want to use those instances for testing. If I use the NUnit runner in the current way, each instance will be created twice. And above all: I have a Spring container and an implementation registry, and via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Granted, I could start the component architecture framework in the startup of the NUnit runner, but that is not what I want. My model is the Apache Cactus framework (with JUnit and Tomcat, JBoss, etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class:

        class SomeClass<T> { }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    It's OK, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is correct too. Why is it correct? I thought that type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

    1. parse: Reads a set of *.java source files and maps the resulting token sequence into AST nodes.
    2. enter: Enters symbols for the definitions into the symbol table.
    3. process annotations: If requested, processes annotations found in the specified compilation units.
    4. attribute: Attributes the syntax trees. This step includes name resolution, type checking and constant folding.
    5. flow: Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
    6. desugar: Rewrites the AST and translates away some syntactic sugar.
    7. generate: Generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in pass 6 (desugar), after type checking, which happens in pass 4 (attribute). I'm confused.

    Read the article

  • Generate and merge data with python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same, and I want to remove them. The sequential version is essentially:

        def create_new_data_for(datum):
            """make a list of new data from some old datum"""
            return [datum.modified_copy(k) for k in datum.k_list]

        data = [some list of data]  # some data to start with

        # generate a list of new data from the old data, we'll reduce it next
        newdata = []
        for d in data:
            newdata.extend(create_new_data_for(d))

        # now reduce the data under ".matches(other)"
        reduced = []
        for d in newdata:
            for seen in reduced:
                if d.matches(seen):
                    break
            else:
                # we haven't seen anything like d yet
                reduced.append(d)

        # now reduced is finished and is what we want!

    I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation: each process would just put the stuff it creates on the queue, and when the processes are reducing the data, they could just get the data from the queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different way to accomplish this goal better?
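
    The safe shape here is "generate in parallel, reduce in one place", so no worker ever mutates the shared reduced list. Sketched below in C#/PLINQ for brevity (Datum, CreateNewDataFor and Matches are stand-ins for the poster's own names); the multiprocessing analogue is Pool.map for the generation step and a plain loop in the parent process for the reduction.

        // Parallelize the expensive generation; keep the "have we seen
        // something like this?" reduction single-threaded, so the shared
        // list is never touched concurrently.
        List<Datum> newData = data
            .AsParallel()
            .SelectMany(d => CreateNewDataFor(d)) // runs on worker threads
            .ToList();                            // gathered back on one thread

        var reduced = new List<Datum>();
        foreach (var d in newData)                // sequential, race-free
        {
            if (!reduced.Any(seen => d.Matches(seen)))
                reduced.Add(d);
        }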

    Read the article

  • Relative Paths in .htaccess, how to attach to a variable?

    - by devians
    I have a very heavy .htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old URLs to old files, where my application processes everything after the htaccess. My ultimate goal is to have a 'Demilitarized Zone' for old file structures, and to use mod_rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using:

        RewriteCond %{IS_SUBREQ} true
        RewriteRule .* - [L]
        RewriteCond %{ENV:REDIRECT_STATUS} 200
        RewriteRule .* - [L]
        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    This allows pseudo-support for relative URLs by not hardcoding my base path anywhere (I can't assume I will ever be deployed in the document root) and using subrequests to check for file existence. It works fine if you know the file name, i.e.

        http://domain.com/path/to/app/legacyfolder/index.html

    However, my legacy URLs are typically

        http://domain.com/path/to/app/legacyfolder/

    mod_rewrite will allow me to check for this by using -d, but it needs the complete path to the directory, i.e.

        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    I want to avoid the hardcoded base path. I can see one possible solution here: somehow determining my path, attaching it to a variable ([E=name:var]) and using it in the condition. Any implementation that allows me to check a directory for existence is more than welcome.

    Read the article

  • How to run setInterval() on multiple canvases simultaneously?

    - by Alex
    I have a page which has several <canvas> elements. I am passing the canvas ID and an array of data to a function which then grabs the canvas info and passes the data onto a draw() function, which in turn processes the given data and draws the results onto the canvas. So far, so good. Example data arrays:

        $(function() {
            setup($("#canvas-1"), [[110,110,100], [180,180,50], [220,280,80]]);
            setup($("#canvas-2"), [[110,110,100], [180,180,50], [220,280,80]]);
        });

    setup function:

        function setup(canvas, data) {
            ctx = canvas[0].getContext('2d');
            var i = data.length;
            var dimensions = {
                w : canvas.innerWidth(),
                h : canvas.innerHeight()
            };
            draw(dimensions, data, i);
        }

    This works perfectly. draw() runs and each canvas is populated. However, I need to animate the canvas. As soon as I replace line 8 of the above example,

        draw(dimensions, data, i);

    with

        setInterval( function() { draw(dimensions, data, i); }, 33 );

    it stops working and only draws the last canvas (with the others remaining blank). I'm new to both javascript and canvas, so sorry if this is an obvious one; I'm still feeling my way around. Guidance in the right direction much appreciated! Thanks.

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and searched here at Stack Overflow through the related questions, but I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means that when we do web sites in Visual Studio, we do file-based web sites rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN it appears that even if you get the source code, it expects to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision.) So that means we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit - added

    I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • Idea for a small project, should I use Python?

    - by Robb
    I have a project idea, but I'm unsure if using Python would be a good choice. Firstly, I'm a C++ and C# developer with some SQL experience; my day job is C++. I have a project I'd like to create, and I was considering developing it in a language I don't know. Python seems to be popular and has piqued my interest. I definitely use OOP in programming, and I understand Python would work fine with that style. I could be way off on this; I've only read small bits and pieces about the language. The project won't be public or anything, purely something of my own creation to dabble in at home. The project would essentially represent a simple game idea I have. The game would consist roughly of these things:

    - Data structures to hold specific information (would be strongly typed).
    - A way to output the gamestate for the players. This is completely up in the air; it can be graphical or text based, I don't really care at this point.
    - A way to save off game data for the players in something like a database or file system.
    - A relatively easy way for me to input information, and a 'GO' button which processes the changes and obviously creates a new gamestate.

    The game would function similar to a board game. Really nothing out of the ordinary when I look back at that list. Would this be a fun way to learn Python, or should I select another language?

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is supposedly good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }

    Read the article

  • svnsync loses revision properties although hook is installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob (it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea where even to start looking. Everything is local (except for the remote master), so there are no server logs to look at. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.

    Read the article

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers); all use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this was a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity, and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>

    Read the article

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where processes communicate using MPI. The basic design I'm trying to implement is the following: the first process (the "server") launches a GUI (derived from QMainWindow) which listens for messages from the clients (using repeat-fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread and calls its start() method. The run() methods of these classes all look like this, from foo.cpp:

        void Foo::run() {
            while (true) {
                // Send message to the first process
                // Wait for a reply
                // Do uninteresting stuff with the reply
                sleep(3); // also tried QThread::sleep(3)
            }
        }

    In the clients' code there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping (if I surround the sleep() call with two writes to a log file, only the first one is executed; control never reaches the second). Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?

    Read the article

  • Using SQL dB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long-running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure is approximately [UserId, col1, col2, ..., colN, beingProcessed, lastTimeProcessed], where beingProcessed is a boolean and lastTimeProcessed is a DateTime. The logic in every worker role is:

        public void WorkerRoleMain()
        {
            while (true)
            {
                try
                {
                    dbContext db = new dbContext();

                    // Read
                    foreach (UserProfile user in db.UserProfile
                        .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24)
                                  & u.beingProcessed == false))
                    {
                        user.beingProcessed = true; // Modify
                        db.SaveChanges();           // Write

                        // Do some long drawn-out processing here
                        // ...

                        user.lastTimeProcessed = DateTime.UtcNow;
                        user.beingProcessed = false;
                        db.SaveChanges();
                    }
                }
                catch (Exception ex)
                {
                    LogException(ex);
                    Sleep(TimeSpan.FromMinutes(5));
                }
            } // while ()
        }

    With multiple workers processing as above (each with their own Entity Framework layer), beingProcessed is in essence being used as a lock for mutual exclusion. Question: how can I deal with concurrency issues on the beingProcessed "lock" itself under the above load? I think the read-modify-write operation on beingProcessed needs to be atomic, but I'm open to other strategies. Open to other code refinements too.
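
    One way to make the claim itself atomic is to push the read-modify-write into a single UPDATE statement and treat the affected-row count as the lock-acquisition test. A hedged sketch in plain ADO.NET (table and column names follow the question; whether to run this inside or alongside Entity Framework is a separate choice):

        // Only the worker whose UPDATE actually changes the row owns the user;
        // every other worker sees 0 rows affected and moves on.
        const string claimSql =
            "UPDATE UserProfile SET beingProcessed = 1 " +
            "WHERE UserId = @userId AND beingProcessed = 0";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(claimSql, conn))
        {
            cmd.Parameters.AddWithValue("@userId", userId);
            conn.Open();
            bool claimed = cmd.ExecuteNonQuery() == 1; // 0 => another worker won
            if (claimed)
            {
                // safe to do the long processing, then clear the flag afterwards
            }
        }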

    Read the article

  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side javascript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this:

    1. Ping an external API to get back a JSON object.
    2. Work client-side with that object, and eventually have a "download me" link.
    3. This link sends the data to my server, which processes it and sends it back with MIME type application/json, which (should) prompt the user to download the file locally.

    Right now here are my pieces:

    Server Side Code

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant Client Side Code

        $("#download").click(function(){
            var json = JSON.stringify(collection); // serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", // this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log("Data Received: " + data[3]);
                }
            });
            return false;
        });

    Right now when I visit the api.php page with Firefox, it prompts a download of download.json, which results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, Firebug logs

        Data Received: testing the encoding

    which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things:

    The Actual Questions

    1. What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler)?
    2. How do I access, server-side, the JSON object being sent to the server, so I can serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything).

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId FROM settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;
            $this->setCurrentItem($i->id);
            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
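
    Whatever the client language, the usual fix is to make advancing the cursor a single atomic statement and key off the affected-row count, instead of a separate SELECT followed by an UPDATE. A sketch of the idea (shown in C# with the MySQL connector for illustration; the SQL is the part that matters, and the PHP translation is direct):

        // The UPDATE only succeeds for the process that moves the cursor
        // forward first, so the affected-row count doubles as "did I win?".
        const string advanceSql =
            "UPDATE settings SET currentItemId = @id WHERE currentItemId < @id";

        using (var conn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand(advanceSql, conn))
        {
            cmd.Parameters.AddWithValue("@id", itemId);
            conn.Open();
            if (cmd.ExecuteNonQuery() == 1)
            {
                // this process won the race for itemId -- safe to process it
            }
            // 0 rows affected => another process already claimed this id or beyond
        }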

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data being shared is a few megabytes in size and, when updated, is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1, it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid, then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)
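
    The protocol described here is essentially a seqlock, and the out-of-order-execution worry is real: without barriers, the compiler or CPU may reorder the t0, data, and t1 reads. Purely as an illustration of where the ordering constraints sit (the original is C over shmget; this sketch uses C# over a memory-mapped view, and the offsets and lengths are assumptions):

        // Reader side of the t0/data/t1 protocol. The barriers mark the points
        // where reordering must be forbidden; memcpy alone gives no such guarantee.
        static byte[] ReadConsistent(MemoryMappedViewAccessor view, int dataLength)
        {
            var data = new byte[dataLength];
            while (true)
            {
                long t0 = view.ReadInt64(0);              // leading timestamp
                Thread.MemoryBarrier();                    // t0 read strictly first
                view.ReadArray(8, data, 0, dataLength);    // the payload
                Thread.MemoryBarrier();                    // payload strictly before t1
                long t1 = view.ReadInt64(8 + dataLength);  // trailing timestamp
                if (t0 == t1) return data;                 // consistent snapshot
                // otherwise the writer was mid-update; retry
            }
        }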

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order, I need to increment OrderNumber for a particular vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        begin transaction

        declare @n int

        select @n = OrderNumber
        from OrderNumberInfo
        where VendorID = @vendorID

        update OrderNumberInfo
        set OrderNumber = @n + 1
        where OrderNumber = @n
          and VendorID = @vendorID

        commit transaction

    Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture:

    - How do these hints play with SQL Server 2008's snapshot isolation?
    - Do they perform row-level, page-level or even table-level locks?
    - How does this tolerate multiple users trying to generate numbers for a single vendor?
    - What isolation levels are appropriate here?
    - And generally, what is the way to do such things?
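
    One commonly used shape for this collapses the read and the increment into a single UPDATE, so the row lock taken by the UPDATE itself serializes concurrent callers, and captures the new value in the same statement. A hedged sketch (the OUTPUT clause needs SQL Server 2005 or later; connection handling here is illustrative):

        // Atomically bumps the vendor's counter and returns the freshly
        // allocated number; the UPDATE's own row lock serializes callers.
        const string nextOrderSql =
            "UPDATE OrderNumberInfo " +
            "SET OrderNumber = OrderNumber + 1 " +
            "OUTPUT inserted.OrderNumber " +
            "WHERE VendorID = @vendorID";

        static int NextOrderNumber(string connectionString, int vendorId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(nextOrderSql, conn))
            {
                cmd.Parameters.AddWithValue("@vendorID", vendorId);
                conn.Open();
                return (int)cmd.ExecuteScalar(); // the OUTPUT row carries the new value
            }
        }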

    Read the article

  • Make JFace Window blink in taskbar or get user's attention?

    - by Sophomore
    Hi folks, I wonder if someone has an idea how to solve this: in my Java Eclipse plugin there are some processes which take some time, so the user might minimize the window and let the process run in the background. Now, when the process is finished, I can force the window to come to the top again, but that is a usability no-no. I'd rather have the process blink in the taskbar instead. Is there any way to achieve this? I had a look at org.eclipse.jface.window but couldn't find anything like that; same goes for the SWT documentation. Another thing which came to my mind: as people are using this app on Mac OS X and Linux as well, is there a platform-independent solution which will inform the user that the process has finished, but without bringing the window to the top? Any ideas are highly welcome!

    Edit

    I found out that on Windows the user can adjust whether to allow forcing a window to the foreground or not. If that option is disabled, the task will just blink in the taskbar. Here's a good read on that... If anyone knows about some platform-independent way of achieving this kind of behaviour, please share your knowledge with me!

    Read the article

  • ASP.NET inline code in a server control

    - by John
    Ok, we had a problem come up today at work. It is a strange one that I never would have even thought to try:

        <form id="form1" runat="server" method="post" action="Default.aspx?id=<%= ID %>" >

    Ok, it is very ugly and I wouldn't ever have tried it myself. It came up in some code that was written years ago, but had been working up until this weekend, after a bunch of updates were installed on the client's web server where the code is hosted. The actual result of this is the following HTML:

        <form name="form1" method="post" action="Default.aspx?id=&lt;%= ID %>" id="form1">

    The URL ends up like this:

        http://localhost:6735/Default.aspx?id=<%= ID %>

    Which, as you can see, demonstrates that the "<" symbol is being encoded before ASP.NET actually processes the page. It seems strange to me, as I thought that even though it is not pretty by any means, it should work. I'm confused. To make matters worse, the client insists that it is a bug in IE, since it appears to work in Firefox. In fact, it is broken in Firefox as well, except that for some reason Firefox treats it as a 0. Any ideas on why this happens and how to fix it easily? Everything I try to render within the server control ends up getting escaped.

    Edit

    Ok, I found a "fix":

        <form id="form1" runat="server" method="post" action='<%# String.Format("Default.aspx?id={0}", 5) %>' >

    But that requires me to call DataBind, which is adding more of a hack to the original hack. Guess if nobody thinks of anything else I'll have to go with that.

    Read the article

  • Windows Service doesn't start process with different credentials

    - by Marcus
    I have a Windows Service, running as a user, that should start several processes under different user credentials. I'm using the following code to start a process:

        Dim winProcess As New System.Diagnostics.Process
        With winProcess
            .StartInfo.Arguments = "some_args"
            .StartInfo.CreateNoWindow = True
            .StartInfo.ErrorDialog = False
            .StartInfo.FileName = "C:\TEMP\ProcessFromService\ProcessFromService\bin\Debug\ProcessFromService.exe"
            .StartInfo.UseShellExecute = False
            .StartInfo.WindowStyle = ProcessWindowStyle.Hidden
            'Specifying WorkingDirectory can sometimes cause problems if the
            'directory in question is not accessible (permissions) to the
            'specified user. So it is better not to specify it.
            '.StartInfo.WorkingDirectory = My.Computer.FileSystem.SpecialDirectories.Temp
            .StartInfo.Domain = ""
            .StartInfo.UserName = "MyUserId"
            Dim strPassword As String = "MyPassword"
            Dim ssPassword As New Security.SecureString
            For Each chrPassword As Char In strPassword.ToCharArray
                ssPassword.AppendChar(chrPassword)
            Next
            .StartInfo.Password = ssPassword
            .Start()
        End With

    The process is correctly started when I use the same credentials that the Windows Service is running under. The process is not started, without any error, when I use different credentials. In other words: if the Windows Service is running as UserA, then I can start a process running as UserA. If the Windows Service is running as UserB, then I can not start a process running as UserA. I have created a test project in which I can reproduce this problem. If you put this project in C:\Temp, the paths used will be correct. You can download the test project here: https://dl.dropboxusercontent.com/u/5391091/ProcessFromService.zip

    NB: I hope this info is enough to explain it. If you need more info, please let me know and I will add it.
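
    Two ProcessStartInfo members are often involved when a service launches a process under other credentials: LoadUserProfile, and an explicit WorkingDirectory the target user can read. Whether they fix this particular setup is untested; a hedged C# sketch of the same launch with those set:

        // Same launch as above, with the settings that commonly matter for
        // cross-credential starts from a service. Untested against this case.
        var psi = new System.Diagnostics.ProcessStartInfo
        {
            FileName = @"C:\TEMP\ProcessFromService\ProcessFromService\bin\Debug\ProcessFromService.exe",
            Arguments = "some_args",
            UseShellExecute = false,       // required when supplying credentials
            CreateNoWindow = true,
            UserName = "MyUserId",
            Domain = ".",                  // "." = local machine account (assumption)
            LoadUserProfile = true,        // some programs fail without a loaded profile
            WorkingDirectory = @"C:\TEMP", // a directory the target user can read
        };
        var password = new System.Security.SecureString();
        foreach (char c in "MyPassword") password.AppendChar(c);
        psi.Password = password;
        // A logon failure surfaces as a Win32Exception; its message usually
        // names the underlying error, which is more to go on than silence.
        System.Diagnostics.Process.Start(psi);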

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database in an on-premise SQL Server 2012 instance to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> task in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using either method, the process could be:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Upload the BACPAC to blob storage
    3. Import the BACPAC into SQL Azure via the Hosted DAC

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Import the BACPAC into SQL Azure via the Client DAC

    Question

    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
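
    For what it's worth, the export/import pair is only a few lines against the DAC client library. A hedged C# sketch (the Microsoft.SqlServer.Dac namespace is the post-move one referred to in method 1; connection strings and file paths are placeholders):

        // Rough shape of BACPAC export/import with the DAC framework's client
        // library (DacFx). Signatures follow current Microsoft.SqlServer.Dac
        // builds; older DAC versions exposed different names, as noted above.
        using Microsoft.SqlServer.Dac;

        var local = new DacServices(@"Data Source=.;Integrated Security=True");
        local.ExportBacpac(@"C:\builds\MyDb.bacpac", "MyDb"); // on-premise -> .bacpac

        var azure = new DacServices(
            "Server=tcp:myserver.database.windows.net;User ID=user;Password=pwd;");
        using (var package = BacPackage.Load(@"C:\builds\MyDb.bacpac"))
        {
            azure.ImportBacpac(package, "MyDb"); // .bacpac -> SQL Azure
        }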

    Read the article
