Search Results

Search found 16914 results on 677 pages for 'single threaded'.

  • WPF Event Handler in Another Class

    - by Nathan Tornquist
    I have built a series of event handlers for some custom WPF controls. The event handlers format the text displayed when the user enters or leaves a textbox, based on the type of data it contains (phone number, zip code, monetary value, etc.). Right now all of the event handlers live locally in the C# code-behind, attached directly to the XAML. Because I have developed a couple of controls, this means the logic is repeated a lot, and if I want to change the program-wide functionality I have to make changes everywhere the event code is located. I am sure there is a way to put all of my event handlers in a single class. Can anyone help point me in the correct direction? I saw this article: Event Handler located in different class than MainWindow. But I'm not sure if it directly relates to what I'm doing. Since my existing logic works, I would rather make small changes to it than rewrite everything into commands. I would essentially like to do something like this, if possible:

        LostFocus="ExpandedTextBoxEvents.TextBox_LostFocus"

    It is easy enough to do something like this:

        private void TextBoxCurrencyGotFocus(object sender, RoutedEventArgs e)
        {
            ExpandedTextBoxEvents.TextBoxCurrencyGotFocus(sender, e);
        }

        private void TextBoxCurrencyLostFocus(object sender, RoutedEventArgs e)
        {
            ExpandedTextBoxEvents.TextBoxCurrencyLostFocus(sender, e);
        }

    But that is less elegant.
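
    A minimal sketch of the shared-class idea, assuming a static handler class (ExpandedTextBoxEvents is the question's own name; the subscription shown, and the amountBox control, are invented for illustration):

        // all formatting logic lives once, in one static class
        public static class ExpandedTextBoxEvents
        {
            public static void TextBoxCurrencyLostFocus(object sender, RoutedEventArgs e)
            {
                var box = (TextBox)sender;
                // format box.Text as a monetary value here
            }
        }

        // each window then subscribes in its constructor instead of
        // duplicating the handler bodies in every code-behind file
        public MainWindow()
        {
            InitializeComponent();
            amountBox.LostFocus += ExpandedTextBoxEvents.TextBoxCurrencyLostFocus;
        }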

  • How to use the AS keyword in MySQL?

    - by karthik
    In the stored procedure below I get the result in one single column. How can I name the column of the output?

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`
        (
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);

            set tablename = in_table;

            # Comma-separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')'))
            INTO @Sels
            from information_schema.columns
            where table_schema = in_db and table_name = tablename;

            # Comma-separated column names - used for Group By
            select group_concat('`',column_name,'`')
            INTO @Whrs
            from information_schema.columns
            where table_schema = in_db and table_name = tablename;

            # Main Select statement for fetching comma-separated table values
            set @Inserts = concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') from ", in_db,".",tablename,
                " where ", in_ColumnName, " = ", in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$

        DELIMITER ;
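
    For reference, naming a column in MySQL is done with an alias via AS after the expression; applied to this procedure, the generated concat(...) expression just needs an alias inside the dynamic select (InsertStatement is an invented alias name):

        -- a computed column named with AS
        SELECT CONCAT('insert into t values(', 1, ');') AS InsertStatement;

    i.e. build @Inserts as "select concat(...) AS InsertStatement from ...".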

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I've obviously done high-level programming, but what I really don't understand is things like:

    - what is happening when I perform a cast
    - what's the difference in performance if I declare something global as opposed to local?
    - how does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - what's the overhead with pointers?
    - are locks more efficient than using synchronized?
    - would creating my own array using int[] be faster than using ArrayList?
    - advantages/disadvantages of declaring a variable volatile

    I have got a copy of Java Performance Tuning, but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses, etc. I want something a bit more Computer Science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in High Frequency Trading, and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance.

  • Convert Array to Multidimensional Array

    - by Tim
    I'd like to convert a single array into a multidimensional array. This is what I have after web scraping a page, except it is not the end result that I am looking for. Change:

        Rooms: Array (
            [0]  => name  [1]  => value  [2]  => size  [3]  => &nbsp;
            [4]  => name  [5]  => value  [6]  => size  [7]  => &nbsp;
            [8]  => name  [9]  => value  [10] => size  [11] => &nbsp;
            [12] => name  [13] => value  [14] => size  [15] => &nbsp;
        )

    Into:

        Rooms: Array (
            Room: Array ( [0] => name [1] => value [2] => size ),
            Room: Array ( [0] => name [1] => value [2] => size ),
            Room: Array ( [0] => name [1] => value [2] => size )
        )
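
    A minimal sketch of one way to do this in PHP 5.3+, assuming the flat array is named $rooms and every group is exactly four entries, the fourth being the &nbsp; filler to discard:

        // split into chunks of 4, then drop the 4th (&nbsp;) element of each
        $grouped = array_map(
            function ($chunk) { return array_slice($chunk, 0, 3); },
            array_chunk($rooms, 4)
        );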

  • Who needs singletons?

    - by sexyprout
    Imagine you access your MySQL database via PDO. You've got some functions, and in these functions you need to access the database. The first thing I thought of is global, like:

        $db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'root', 'pwd');

        function some_function() {
            global $db;
            $db->query('...');
        }

    But it's considered bad practice. So, after a little searching, I ended up with the Singleton pattern, which "applies to situations in which there needs to be a single instance of a class." According to the example in the manual, we should do this:

        class Database
        {
            private static $instance, $db;

            private function __construct() {}

            static function singleton()
            {
                if (!isset(self::$instance))
                    self::$instance = new self(); // the manual's `new __CLASS__` doesn't parse
                return self::$instance;
            }

            function get()
            {
                if (!isset(self::$db))
                    self::$db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'user', 'pwd');
                return self::$db;
            }
        }

        function some_function() {
            $db = Database::singleton();
            $db->get()->query('...');
        }

        some_function();

    But I just can't understand why you need that big class when you can do it merely with:

        class Database
        {
            private static $db;

            private function __construct() {}

            static function get()
            {
                if (!isset(self::$db))
                    self::$db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'user', 'pwd');
                return self::$db;
            }
        }

        function some_function() {
            Database::get()->query('...');
        }

        some_function();

    This last one works perfectly and I don't need to worry about $db anymore. But maybe I'm forgetting something. So, who's wrong, who's right?

  • how can you have the same form handled by javascript multiple times on the same page?

    - by DeChamp
    I have a thumbnail gallery where I am using ajax/javascript (with PHP on the back end) to submit a form per image, to seamlessly report an image as broken. The form and script are templated, so the script is in the header and the form is printed multiple times on the same page, each with a hidden field whose value differs per thumb. So basically the layout is: the javascript below in the header, then image1 followed by its form, image2 followed by its form, and so on (just a quick idea, not exactly what I have). When you hit the button it basically submits all of the forms at the same time. I am sure it can be fixed with $(this) or something like that, so it only submits a single form at a time. Let me know please.

        $(function() {
            $(".submit").click(function() {
                var imgId = $("#imgId").val();
                var dataString = 'imgId=' + imgId;
                if (imgId == '') {
                    $('.success').fadeOut(200).hide();
                    $('.error').fadeIn(200).show();
                    $('.error').fadeOut(200).hide();
                } else {
                    $.ajax({
                        type: "POST",
                        url: "inc/brokenImgReport.php",
                        data: dataString,
                        success: function() {}
                    });
                    $('.error').fadeOut(200).hide();
                    $('.success').fadeIn(200).show();
                    setTimeout(function() {
                        $('.success').fadeOut(200);
                    }, 2000);
                }
                return false;
            });
        });
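
    A sketch of the $(this) approach, assuming each button sits inside its own form and the hidden field carries a class (imgId as a class name is made up here -- duplicated ids are the root of the problem) instead of an id:

        $(function() {
            $(".submit").click(function() {
                // scope the lookup to the form containing the clicked button
                var form = $(this).closest("form");
                var imgId = form.find(".imgId").val();
                $.ajax({
                    type: "POST",
                    url: "inc/brokenImgReport.php",
                    data: { imgId: imgId }
                });
                return false;
            });
        });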

  • How to use a TFileStream to read 2D matrices into dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run-time. I've never read raw data like this, and I don't know the IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:

    - 16-bit two's complement binary integer
    - 32-bit two's complement binary integer
    - 64-bit two's complement binary integer
    - IEEE single precision floating-point

    For 32-bit two's complement, I'm thinking of something like the code below. Changing to Int64 and Int16 should be straightforward. How can I read the IEEE format? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it for all 4 data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move()s, so that the same code works for all 4 data types? Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;  // not known until run time
          NumberOfCols := 100; // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows - 1 do // 'to NumberOfRows' would overrun by one
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            // pass the first element, not the dynamic-array variable itself
            FileStream.ReadBuffer(Matrix[CurRow][0], NumberOfCols * SizeOf(Int32));
          end;
        end;
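
    For what it's worth, Delphi's Single type is already an IEEE 754 32-bit float, so the IEEE case is the same read with a different element type -- a sketch, assuming the same surrounding code (SingleMatrix is an invented name):

        type
          TSingleRow = array of Single;
        ...
          FileStream.ReadBuffer(SingleMatrix[CurRow][0], NumberOfCols * SizeOf(Single));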

  • Javascript, IE, Strings, and Performance problems

    - by Infinity
    Hey guys, so we have this product, and it's really slow in IE. We've already applied a lot of the practices advised by the IE team themselves (like this, and this), and we try to sacrifice clean code for performance in the critical parts like DOM manipulation. However, as you can see in this IE profiler screenshot, just "String" is the biggest offender: almost 750ms of exclusive time. Does this mean IE is spending 750ms just instantiating Strings? I also read this on the Opera dev blog:

        A build script can remove whitespace, comments, replace strings with Array
        lookups (to avoid MSIE creating a string object for every single instance
        of a string -- even in conditions)

    But there was no more info regarding this. Can anyone clarify? It seems like IE has to create a full String instance every time you have " " in your code, which could explain this, but I don't know what the array lookup optimization would look like. By the way, we don't really do much string concatenation anywhere in the code. The library we use is MooTools 1.2.4. Any suggestions will be appreciated! Thx
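
    A guess at what such a build transform might produce -- repeated literals hoisted into one shared array, so every use references the same string object instead of a fresh literal (the names here are invented for illustration):

        // before the build step
        if (state === "expanded") { setLabel("expanded"); }

        // after the build step: one literal, many lookups
        var STR = ["expanded"];
        if (state === STR[0]) { setLabel(STR[0]); }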

  • Why do I have to specify pure virtual functions in the declaration of a derived class in Visual C++?

    - by neuviemeporte
    Given the base class A and the derived class B:

        class A {
        public:
            virtual void f() = 0;
        };

        class B : public A {
        public:
            void g();
        };

        void B::g() { cout << "Yay!"; }
        void B::f() { cout << "Argh!"; }

    I get errors saying that f() is not declared in B when trying to define void B::f(). Do I have to declare f() explicitly in B? I think that if the interface changes, I shouldn't have to correct the declarations in every single class deriving from it. Is there no way for B to get all the virtual functions' declarations from A automatically?

    EDIT: I found an article that says the inheritance of pure virtual functions is dependent on the compiler: http://www.objectmentor.com/resources/articles/abcpvf.pdf

    I'm using VC++ 2008; I wonder if there's an option for this.
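
    For reference, this is the declaration the compiler is asking for -- standard C++ requires each override to be declared in the class that defines it, so B would read:

        class B : public A {
        public:
            void f(); // must be declared here before `void B::f()` can be defined
            void g();
        };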

  • Setting Session/Cookie via ajax request made on other website

    - by user596805
    Hi, that's my problem: I have a website, example.com, in whose index.html file I introduced a <script src="website.net/js.js"></script>. You can see that this is on another web server. In js.js I have some data that I want to send to PHP. For that, I am using Ajax. So I made a request to "website.net/data.php" using method GET. In the data.php file everything is OK -- I received the value -- but I want to set a cookie whose value is what I received through Ajax. Here is the problem: the setcookie function says that the cookie was set, but when I check in the browser, there's no cookie! It works fine if the index.html file where I use <script src="website.net/js.js"></script> is hosted on the same domain where I am making the request; if it is on another domain, it doesn't work anymore. I have read something about cross-site Ajax, but I don't want to send anything back to example.com. All I want is to send some data from example.com to website.net and then set a cookie based on that value. Thank you very much, and sorry for my English!

    Later edit: I am not very familiar with this website. From example.com I take a single value. On website.net I receive that value and check whether a cookie is already set; if it's not, I set it. On the same page, website.net, I use this cookie too.

  • LINQ to SQL - Insert yielding strange behavior.

    - by Isaac
    Hi, I'm trying to insert several newly created items into the database. I have a LINQ2SQL-generated class called "Order". Inside Order, there's a property called "OrderItems", also generated by LINQ2SQL, which represents the items of that order. So far so good. The problem I'm having right now is when I try to add more than one newly created OrderItem to an Order. I.e.:

        Order o = orderWorker.GetById(10);

        for (int i = 0; i < 5; ++i)
        {
            OrderItem oi = new OrderItem
            {
                Order = o,
                Price = 100,
                ShippingPrice = 100,
                ShippingMethod = ...
            };
            o.OrderItems.Add(oi);
        }

        context.SubmitChanges();

    Unfortunately, only a single entity is being added. Yes, I checked the generated SQL by adding Context.Log = Console.Out, and yes, only one statement was created. Any clues? By the way, I know I'm not using InsertOnSubmit, but the documentation says:

        You can explicitly request Inserts by using InsertOnSubmit. Alternatively,
        LINQ to SQL can infer Inserts by finding objects connected to one of the
        known objects that must be updated. For example, if you add an Untracked
        object to an EntitySet(TEntity) or set an EntityRef(TEntity) to an
        Untracked object, you make the Untracked object reachable by way of
        tracked objects in the graph. While processing SubmitChanges, LINQ to SQL
        traverses the tracked objects and discovers any reachable persistent
        objects that are not tracked. Such objects are candidates for insertion
        into the database.

    Thank you very much for your time.
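
    One variation worth trying as a diagnostic -- queue each item explicitly instead of relying on inference (this assumes the data context exposes a Table<OrderItem> property named OrderItems, as the designer usually generates):

        OrderItem oi = new OrderItem { Price = 100, ShippingPrice = 100 /* ... */ };
        o.OrderItems.Add(oi);
        context.OrderItems.InsertOnSubmit(oi); // explicit insert, no inference needed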

  • How do I do a count on that meet a specific condition, dependent on several has_many relationships i

    - by Angela
    I have a model Campaign. A Campaign has many Events. Each Event has an attribute :days. A Campaign also has many Contacts. Each Contact has a :date_entered attribute.

    The from_today(contact, event) method returns a number: the number of days from the contact's :date_entered until today, minus the event's :days. In other words, a positive number is the number of days from today until the event's :days have elapsed. If it is negative, the number of days that have elapsed since :date_entered is greater than the event's :days attribute -- in other words, the event is overdue.

    What I would like to be able to do is call campaign.overdue and get the total number of contacts that have an overdue event. It shouldn't count multiple events for a single contact, just one per contact. How do I do that? It seems like I would need to cycle through all the events for every contact and keep a counter, but I'm assuming there is a better way.
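
    A sketch of the straightforward (if non-SQL) version, using the question's own model and method names and counting each contact at most once:

        class Campaign < ActiveRecord::Base
          has_many :events
          has_many :contacts

          # number of contacts with at least one overdue event
          def overdue
            contacts.select { |contact|
              events.any? { |event| from_today(contact, event) < 0 }
            }.size
          end
        end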

  • Why Is Apache Giving 403?

    - by ThinkCL
    I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs via a desktop app I am building in Xcode / Objective-C. The 12 POST requests are just a few KB each and go out instantly, one after the other, and the Apache error log shows:

        client denied by server configuration: /the-path/the-file.php

    This is Apache 2.0 with PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by 1 second) into the PHP file, and that fixed it. This makes me think I am breaking some limit on requests from a single IP in a certain amount of time. I have googled and combed the PHP ini and Apache configs, but I cannot find what that directive/setting might be. I should mention that although it varies, the first 4 or 5 POSTs usually work, and then it starts returning the 403 error intermittently after that. It is really acting like it's bogging down. Any ideas?
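
    One guess consistent with these symptoms (an assumption -- nothing in the log line itself names the module): a rate-limiting module such as mod_evasive, which 403s a client that exceeds a per-page request threshold within a short interval. Its configuration, if present on the VPS, would look roughly like:

        <IfModule mod_evasive20.c>
            DOSPageCount      5   # max requests for the same page...
            DOSPageInterval   1   # ...per this many seconds
            DOSBlockingPeriod 10  # seconds the client then gets 403s
        </IfModule>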

  • Visual Studio 2008 linker wants all symbols to be resolved, not only used ones

    - by user343011
    We recently upgraded to Visual Studio 2008 from 2005, and I think these errors started after that. In our solution, we have a multitude of projects. Many of them are utility projects, or projects containing core functionality used by other projects. The output of those is .lib files, which are linked into the projects that build the final binaries via the "Project dependencies..." option. One of the other projects -- let us call it ResultLib -- generates a DLL, and it needs one single function from the core project. This function uses only static functions from its own source file, but the core project in its entirety uses a lot of low-level Windows functions and also imports a DLL -- let us call it Driver.dll. Our problem is that when building ResultLib, the linker complains about a multitude of unresolved externals, for example all functions exported from Driver.dll, since its .lib file is not specified when linking. If we try to fix this by adding all the .lib files used by other projects that use all of the core project, our resulting ResultLib DLL ends up importing Driver.dll and also exporting all functions defined in it. How do we tell Visual Studio to only try to resolve symbols that are actually used?

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM):

        WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0
        (with SQL Server 2008 R2 as the DB)

    All of these are different projects. In the data access layer we have created different entity models (.edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. With a single model everything works fine, but when we created multiple models a few issues started coming up. The multiple models have a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" types get created. E.g.:

        CompanyModel (edmx) - Company (entity)  - ObjectChangeTracker,  ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) A few tables are shared across the models; e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it fails with "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly?

    Thanks in advance, and we'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[1000];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following:

        digits[0] = 3
        digits[1] = 2
        digits[2] = 1

    I have already managed to code the adding function, which works perfectly. It works somewhat like this:

        overflow = 0
        for i++ until the length of both numbers is exceeded:
            add numberA[i] to numberB[i]
            add overflow to the result
            set overflow to 0
            if the result is bigger than 10:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[i]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, and overflow gets added to the next digit.)

    So think of how two numbers are stored, like these:

            0 | 1 | 2
        A   2 | - | -
        B   0 | 0 | 1

    The above represents the digits of the bigints 2 (A) and 100 (B). "-" means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1. But when I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this:

            0 | 1 | 2
        A   - | - | 2
        B   0 | 0 | 1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2. How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping -- I need performance!
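
    For what it's worth, schoolbook multiplication needs no flipping at all with the reversed layout: digit i of A times digit j of B simply contributes to digit i+j of the result. A minimal C++ sketch under the question's representation, assuming lenA/lenB are the two lengths and result is zero-initialized with room for lenA+lenB digits:

        for (int i = 0; i < lenA; ++i) {
            int carry = 0;
            for (int j = 0; j < lenB; ++j) {
                int cur = result[i + j] + a.digits[i] * b.digits[j] + carry;
                result[i + j] = cur % 10;   // keep one digit
                carry = cur / 10;           // push the rest upward
            }
            result[i + lenB] += carry;      // final carry of this row
        }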

  • Splitting a string which contain multiple symbols to get specific values

    - by Eon Rusted du Plessis
    I cannot believe I am having trouble with the following string:

        String filter = "name=Default;pattern=%%;start=Last;end=Now";

    This is a short and possibly duplicate question, but how would I split this string to get:

        string Name = "Default";
        string Pattern = "%%";
        string start = "Last";
        string end = "Now";

    The reason I ask is that my deadline is very soon, and this is literally the last thing I must do. I'm panicking, and I'm stuck on this basic command. I tried:

        // Gets the pattern
        pattern = filter.Split(new string[] { "pattern=", ";" }, StringSplitOptions.RemoveEmptyEntries)[1];
        // Gets the start date
        startDate = filter.Split(new string[] { "start=", ";" }, StringSplitOptions.RemoveEmptyEntries)[1];

    I happened to get the pattern which I needed, but as soon as I try to split out start, I get the value "Pattern=%%". What can I do?

    Forgot to mention: the list in this string which needs splitting may not be in any particular order. This is a single sample of a string which will be read out of a StringCollection (reading these filters from Properties.Settings.Filters).
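
    A minimal order-independent sketch: split on ';' first, then on the first '=' of each part, and look the values up by key (requires System.Linq):

        var parts = filter.Split(';')
                          .Select(p => p.Split(new[] { '=' }, 2))
                          .ToDictionary(kv => kv[0], kv => kv[1]);

        string name    = parts["name"];    // "Default"
        string pattern = parts["pattern"]; // "%%"
        string start   = parts["start"];   // "Last"
        string end     = parts["end"];     // "Now"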

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

  • Using an initializer_list on a map of vectors

    - by Hooked
    I've been trying to initialize a map of <int, vector<int>> using the new 0x standard, but I cannot seem to get the syntax correct. I'd like to make a map with a single entry whose key:value pair is 1 : <3,4>.

        #include <initializer_list>
        #include <map>
        #include <vector>
        using namespace std;

        map<int, vector<int> > A = {1,{3,4}};
        ....

    It dies with the following error using gcc 4.4.3:

        error: no matching function for call to 'std::map<int, std::vector<int,
        std::allocator<int> >, std::less<int>, std::allocator<std::pair<const int,
        std::vector<int, std::allocator<int> > > > >::map(<brace-enclosed
        initializer list>)'

    Edit: Following the suggestion by Cogwheel and adding the extra brace, it now compiles with a warning that can be gotten rid of using the -fno-deduce-init-list flag. Is there any danger in doing so?
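
    For reference, the extra brace makes each element of the outer list a {key, value} pair -- the form that compiles:

        map<int, vector<int> > A = { {1, {3, 4}} };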

  • Modeling many-to-one with constraints?

    - by Greg Beech
    I'm attempting to create a database model for movie classifications, where each movie could have a single classification from each of multiple rating systems (e.g. BBFC, MPAA). This is the current design, with all implied PKs and FKs:

        TABLE Movie
        (
            MovieId INT
        )

        TABLE ClassificationSystem
        (
            ClassificationSystemId TINYINT
        )

        TABLE Classification
        (
            ClassificationId INT,
            ClassificationSystemId TINYINT
        )

        TABLE MovieClassification
        (
            MovieId INT,
            ClassificationId INT,
            Advice NVARCHAR(250) -- description of why the classification was given
        )

    The problem is with the MovieClassification table, whose constraints would allow multiple classifications from the same system, whereas it should ideally permit zero or one classifications from a given system. Is there any reasonable way to restructure this so that a movie having exactly zero or one classifications from any given system is enforced by database constraints, given the following requirements?

    - Do not duplicate information that could be looked up (i.e. duplicating ClassificationSystemId in the MovieClassification table is not a good solution, because it could get out of sync with the value in the Classification table).
    - Remain extensible to multiple classification systems (i.e. a new classification system does not require any changes to the table structure).

    A sketch of one common compromise appears below. Note also the Advice column: each mapping of a movie to a classification needs a textual description of why that classification was given to that movie. Any design would need to support this.
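
    The sketch mentioned above bends the first requirement, but a composite foreign key keeps the duplicated column from ever drifting out of sync with Classification:

        ALTER TABLE Classification ADD CONSTRAINT UQ_Classification
            UNIQUE (ClassificationId, ClassificationSystemId);

        CREATE TABLE MovieClassification
        (
            MovieId INT NOT NULL REFERENCES Movie (MovieId),
            ClassificationId INT NOT NULL,
            ClassificationSystemId TINYINT NOT NULL,
            Advice NVARCHAR(250),
            -- at most one classification per movie per system
            PRIMARY KEY (MovieId, ClassificationSystemId),
            -- the pair must match a real classification, so the system id
            -- cannot disagree with the classification's own system
            FOREIGN KEY (ClassificationId, ClassificationSystemId)
                REFERENCES Classification (ClassificationId, ClassificationSystemId)
        );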

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder -- yes, silly me. After a while, I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, and there are more than 30 of them! This concerns me, though it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, taking into account that it's compressed, compiled code, so they could really quickly populate a lot of RAM, and in an instant. My questions are:

    - Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used?
    - Do .jars get cached somehow, for optimized application performance?
    - When a single .jar is loaded, I understand it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance running), unlike PHP where objects are created on the fly with each request -- is this assumption correct?
    - When using Spring, I'm thinking: I had to include all those fiddly .jars, so wouldn't I be better off just using plain Java, with at least an ORM solution like Hibernate? So far, Spring has just taken extra time to configure, plus extra hard drive space, memory, and CPU consumption, so I'm concerned that the framework will cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly easily. Where do I draw the line?

  • java web templates across multiple WAR files

    - by Casey
    I have a multi-WAR web application that was designed badly. There is a single WAR that is responsible for handling some authorization against a database and defines a standard web page using a JSP taglib. The main WAR basically checks the privileges of the user and then, based on that, displays links to the context paths of the other deployed WARs. Each of the other deployed WARs includes this custom taglib. I am working on redesigning this application, and one of the nice things I want to retain is that other project teams have developed these WAR modules that "plug into" our current system to take advantage of other things we have to offer. I am not entirely sure how to handle the page templates, though. I need a templating system that would be easy to use across multiple WARs (I was thinking of JSP fragments??). I really only need to define a consistent header and main navigation section; whatever else is displayed on the page is up to the individual web project. Any suggestions? I hope that this is clear; if not, I can elaborate more.

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about:

        ...declaring a variable volatile makes it volatile for every single access.
        It is impossible to force this behavior any other way, hence volatile
        cannot be replaced with Interlocked. This is needed in scenarios where
        other libraries, interfaces or hardware can access your variable and
        update it anytime, or need the most recent version.

    Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example:

        public class CountDownLatch
        {
            // do I need the volatile keyword here, since I'm using Interlocked?
            private volatile int m_remain;
            private EventWaitHandle m_event;

            public CountDownLatch(int count)
            {
                Reset(count);
            }

            public void Reset(int count)
            {
                if (count < 0)
                    throw new ArgumentOutOfRangeException();
                m_remain = count;
                m_event = new ManualResetEvent(false);
                if (m_remain == 0)
                {
                    m_event.Set();
                }
            }

            public void Signal()
            {
                // The last thread to signal also sets the event.
                if (Interlocked.Decrement(ref m_remain) == 0)
                    m_event.Set();
            }

            public void Wait()
            {
                m_event.WaitOne();
            }
        }

  • Efficient (basic) regular expression implementation for streaming data

    - by Brendan Dolan-Gavitt
    I'm looking for an implementation of regular expression matching that operates on a stream of data -- i.e., it has an API that allows a user to pass in one character at a time and reports when a match is found on the stream of characters seen so far. Only very basic (classic) regular expressions are needed, so a DFA/NFA-based implementation seems well-suited to the problem. Based on the fact that it's possible to do regular expression matching using a DFA/NFA in a single linear sweep, it seems like a streaming implementation should be possible.

    Requirements:

    - The library should not need to wait until the full string has been read before performing the match. The data I have really is streaming; there is no way to know how much data will arrive, and it's not possible to seek forward or backward.
    - Implementing specific stream matching for a couple of special cases is not an option, as I don't know in advance what patterns a user might want to look for.

    For the curious, my use case is the following: I have a system which intercepts memory writes inside a full-system emulator, and I would like a way to identify memory writes that match a regular expression (e.g., one could use this to find the point in the system where a URL is written to memory).

    I have found (links de-linkified because I don't have enough reputation):

    - stackoverflow.com/questions/1962220/apply-a-regex-on-stream
    - stackoverflow.com/questions/716927/applying-a-regular-expression-to-a-java-i-o-stream
    - www.codeguru.com/csharp/csharp/cs_data/searching/article.php/c14689/Building-a-Regular-Expression-Stream-Search-with-the-NET-Framework.htm

    But all of these attempt to convert the stream to a string first and then use a stock regular expression library. Another thought I had was to modify the RE2 library, but according to the author it is architected around the assumption that the entire string is in memory at the same time. If nothing's available, then I can start down the unhappy path of reinventing this wheel to fit my own needs, but I'd really rather not if I can avoid it. Any help would be greatly appreciated!
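
    To make the idea concrete, a toy sketch (not an existing library) of NFA simulation fed one character at a time: the hard-coded automaton matches the pattern ab*c, and the start state is re-seeded at every character so a match may begin anywhere in the stream.

        #include <iostream>
        #include <set>
        #include <string>

        // Hard-coded NFA for "ab*c": 0 -a-> 1, 1 -b-> 1, 1 -c-> 2 (accepting)
        bool feed(std::set<int>& states, char ch)
        {
            std::set<int> next;
            states.insert(0);             // a new match attempt starts at every position
            for (int s : states) {
                if (s == 0 && ch == 'a') next.insert(1);
                if (s == 1 && ch == 'b') next.insert(1);
                if (s == 1 && ch == 'c') next.insert(2);
            }
            states.swap(next);
            return states.count(2) != 0;  // did any attempt reach the accepting state?
        }

        int main()
        {
            std::set<int> states;
            for (char ch : std::string("xxabbbcx"))
                if (feed(states, ch))
                    std::cout << "match ends here\n";
        }

    Memory use is bounded by the number of NFA states, independent of stream length, which is what makes the approach viable for an unbounded stream.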

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID,
               t.Name as Task,
               '' as Tracker,
               t.ClientID,
               (<complex subquery>) Date
        INTO   [#Gadget]
        FROM   task t

        SELECT TOP 500 TaskID,
               Task,
               Tracker,
               ClientID,
               dbo.GetClientDisplayName(ClientID) as Client
        FROM   [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query was done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID,
               Task,
               Tracker,
               ClientID,
               dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID,
                   t.Name as Task,
                   '' as Tracker,
                   t.ClientID,
                   (<complex subquery>) Date
            FROM   task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs. under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on?!
