Search Results

Search found 9366 results on 375 pages for 'common lisp'.


  • UpdateModelFromDatabaseException when trying to add a table to Entity Framework model

    - by Agent_9191
    I'm running into a weird issue with Entity Framework in .NET 3.5 SP1 within Visual Studio 2008. I created a database with a few tables in SQL Server and then created the associated .edmx Entity Framework model with no issues. I then created a new table in the database that has a foreign key to an existing table and needed to add it to the .edmx. So I opened the .edmx in Visual Studio, right-clicked in the model and chose "Update Model From Database...". I saw the new table in the "Add" tab, so I checked it and clicked Finish. However, I get an error message with the following text:

        ---------------------------
        Microsoft Visual Studio
        ---------------------------
        An exception of type 'Microsoft.Data.Entity.Design.Model.Commands.UpdateModelFromDatabaseException'
        occurred while attempting to update from the database. The exception message is:
        'Cannot update from the database. Cannot resolve the Name Target for ScalarProperty 'ID <==> CustomerID'.'.
        ---------------------------
        OK
        ---------------------------

    For reference, here are the tables that seem most pertinent to the error. CustomerPreferences already exists in the .edmx; Diets is the table that was added afterwards and that I am trying to add to the .edmx.

        CREATE TABLE [dbo].[CustomerPreferences](
            [ID] [uniqueidentifier] NOT NULL,
            [LastUpdatedTime] [datetime] NOT NULL,
            [LastUpdatedBy] [uniqueidentifier] NOT NULL,
            PRIMARY KEY CLUSTERED ( [ID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        CREATE TABLE [dbo].[Diets](
            [ID] [uniqueidentifier] NOT NULL,
            [CustomerID] [uniqueidentifier] NOT NULL,
            [Description] [nvarchar](50) NOT NULL,
            [LastUpdatedTime] [datetime] NOT NULL,
            [LastUpdatedBy] [uniqueidentifier] NOT NULL,
            PRIMARY KEY CLUSTERED ( [ID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        ALTER TABLE [dbo].[Diets] WITH CHECK ADD CONSTRAINT [FK_Diets_CustomerPreferences]
            FOREIGN KEY([CustomerID]) REFERENCES [dbo].[CustomerPreferences] ([ID])
        GO
        ALTER TABLE [dbo].[Diets] CHECK CONSTRAINT [FK_Diets_CustomerPreferences]
        GO

    This seems like a fairly common use case, so I'm not sure where I'm going wrong.

    Read the article

  • Array's index and argc signedness

    - by tusbar
    Hello, the C standard (5.1.2.2.1 Program startup) says:

        The function called at program startup is named main. [...] It shall be defined with a
        return type of int and with no parameters:
            int main(void) { /* ... */ }
        or with two parameters [...]:
            int main(int argc, char *argv[]) { /* ... */ }

    And later says: "The value of argc shall be nonnegative."

    Why shouldn't argc be defined as an unsigned int, argc supposedly meaning 'argument count'? Should argc be used as an index for argv? So I started wondering whether the C standard says anything about the signedness of array indexes. From 6.5.2.1 Array subscripting:

        One of the expressions shall have type "pointer to object type", the other expression
        shall have integer type, and the result has type "type".

    It doesn't say anything about signedness (or I didn't find it). It is pretty common to see code using negative array indexes (array[-1]), but isn't that undefined behavior? Should array indexes be unsigned?
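
    For what it's worth, a small illustration (my own example, not from the standard) of where a negative index is and is not defined:

        #include <stdio.h>

        int main(void)
        {
            int a[5] = {10, 20, 30, 40, 50};
            int *p = &a[2];          /* p points into the middle of the array        */
            printf("%d\n", p[-1]);   /* fine: refers to a[1], still inside the array */
            /* a[-1] would be undefined behavior: it points before the start of a    */
            return 0;
        }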

    Read the article

  • Why doesn't g++ pay attention to __attribute__((pure)) for virtual functions?

    - by jchl
    According to the GCC documentation, __attribute__((pure)) tells the compiler that a function has no side effects, and so it can be subject to common subexpression elimination. This attribute appears to work for non-virtual functions, but not for virtual functions. For example, consider the following code:

        extern void f( int );

        class C
        {
        public:
            int a1();
            int a2() __attribute__((pure));
            virtual int b1();
            virtual int b2() __attribute__((pure));
        };

        void test_a1( C *c ) { if( c->a1() ) { f( c->a1() ); } }
        void test_a2( C *c ) { if( c->a2() ) { f( c->a2() ); } }
        void test_b1( C *c ) { if( c->b1() ) { f( c->b1() ); } }
        void test_b2( C *c ) { if( c->b2() ) { f( c->b2() ); } }

    When compiled with optimization enabled (either -O2 or -Os), test_a2() only calls C::a2() once, but test_b2() calls b2() twice. Is there a reason for this? Is it because, even though the implementation in class C is pure, g++ can't assume that the implementation in every subclass will also be pure? If so, is there a way to tell g++ that this virtual function and every subclass's implementation will be pure?
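
    One workaround that is independent of what the optimizer decides to do is to hoist the call manually so it is only made once (a sketch, not an answer about the attribute itself):

        void test_b2( C *c )
        {
            int v = c->b2();        // call the virtual function once and reuse the result
            if( v ) { f( v ); }
        }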

    Read the article

  • Replacing certain words with links to definitions using Javascript

    - by adharris
    I am trying to create a glossary system which will get a list of common words and their definitions via AJAX, then replace any occurrence of those words in certain elements (those with the useGlossary class) with a link to the full definition, and provide a short definition on mouse hover. The way I am doing it works, but for large pages it takes 30-40 seconds, during which the page hangs. I would like to either decrease the time the replacement takes or make the replacement run in the background without hanging the page. I am using jQuery for most of the JavaScript, and qTip for the mouse hover. Here is my existing slow code:

        $(document).ready(function () {
            $.get("fetchGlossary.cfm", null, glossCallback, "json");
        });

        function glossCallback(data)
        {
            $(".useGlossary").each(function() {
                var $this = $(this);
                for (var i in data) {
                    $this.html($this.html().replace(
                        new RegExp("\\b" + data[i].term + "\\b", "gi"),
                        function(m) { return makeLink(m, data[i].def); }));
                }
                $this.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
            });
        }

        function makeLink(m, def)
        {
            return "<a class='glossary glossary" + m.replace(/\s/gi, "").toUpperCase() +
                   "' href='reference/glossary.cfm' title='" + def + "'>" + m + "</a>";
        }

    Thanks for any feedback/suggestions!
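
    One way to keep the page responsive is to touch the DOM only once per element (build the new HTML in a string first) and to yield to the browser between elements with setTimeout. A sketch only; it assumes data is a plain array of {term, def} objects:

        function glossCallback(data) {
            var elements = $(".useGlossary").get();

            function processNext() {
                var el = elements.shift();
                if (!el) { return; }                     // done
                var $el = $(el), html = $el.html();
                for (var i = 0; i < data.length; i++) {
                    html = html.replace(
                        new RegExp("\\b" + data[i].term + "\\b", "gi"),
                        function (m) { return makeLink(m, data[i].def); });
                }
                $el.html(html);                          // write the DOM once per element
                $el.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
                setTimeout(processNext, 0);              // yield before the next element
            }
            processNext();
        }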

    Read the article

  • Number of characters recommended for a statement

    - by liaK
    Hi, I have been using Qt 4.5 and therefore C++. I have been told that it's standard practice to keep each statement in the application within 80 characters. Even in Qt Creator we can make a right border visible so that we know when we are crossing the 80-character limit. But my question is: is this really a standard that is widely followed?

    In my application I use indentation, so it's quite common for me to cross the boundary. Other cases include an error message that is a bit explanatory and sits in an inner block of code, so it too crosses the boundary. My variable names tend to be a bit long so that they are meaningful, and when I call functions on those variables I cross the limit again; function names are not short either. I agree that a horizontal scroll bar shows up and it's quite annoying to move back and forth. So for function calls with multiple arguments, when the boundary is reached I put the remaining arguments on a new line. But for a single statement (for example a very long error message in double quotes " ", or a chain like longfun1()->longfun2()->...), if I use a \ and split it into multiple lines, the readability becomes very poor.

    So is it good practice to have this kind of statement-length restriction, and does it have to be followed? I don't think it depends on a specific language anyway; I added the C++ and Qt tags in case it does. Any pointers regarding this are welcome.
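
    As a side note, C++ statements can be split across lines without a trailing backslash (that is only needed inside preprocessor macros): adjacent string literals are concatenated by the compiler, and call chains can break before the operator. A small illustration with made-up names:

        // Long string literals: adjacent literals are concatenated into one.
        const char *msg =
            "Failed to open the configuration file: "
            "please check that the path exists and is readable.";

        // Long call chains: break before the -> (or .) operator.
        someObject->firstLongFunctionName()
                  ->secondLongFunctionName()
                  ->thirdLongFunctionName();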

    Read the article

  • Checked and Unchecked operators don't seem to be working when...

    - by flockofcode
    1) Is the unchecked operator in effect only when the expression inside the unchecked context uses an explicit cast (such as byte b1 = unchecked((byte)2000);) and when conversion to the particular type can happen implicitly? I'm assuming this since the following expression throws a compile-time error:

        byte b1 = unchecked(2000); // compile-time error

    2) a) Do the checked and unchecked operators work only when the resulting value of an expression or conversion is of an integer type? I'm assuming this since in the first example (where a double is converted to an integer type) the checked operator works as expected:

        double m = double.MaxValue;
        b = checked((byte)m); // reports an exception

    while in the second example (where a double is converted to a float) the checked operator doesn't seem to be working, since it doesn't throw an exception:

        double m = double.MaxValue;
        float f = checked((float)m); // no exception thrown

    b) Why don't the two operators also work with expressions where the type of the resulting value is a floating-point type?

    3) The next quote is from Microsoft's site: "The unchecked keyword is used to control the overflow-checking context for integral-type arithmetic operations and conversions." I'm not sure I understand what exactly expressions and conversions such as unchecked((byte)(100+200)); have in common with integrals. Thank you
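
    A small self-contained illustration of why the float case never throws (my own example, not the poster's code): checked/unchecked affect only integral arithmetic and conversions, and an out-of-range double-to-float conversion produces infinity rather than an overflow.

        using System;

        class Demo
        {
            static void Main()
            {
                double m = double.MaxValue;

                // Floating-point conversion: unaffected by checked; the value
                // simply becomes float.PositiveInfinity instead of overflowing.
                float f = checked((float)m);
                Console.WriteLine(float.IsPositiveInfinity(f)); // True

                // Integral conversion: checked raises OverflowException here.
                try { Console.WriteLine(checked((byte)m)); }
                catch (OverflowException) { Console.WriteLine("overflow"); }
            }
        }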

    Read the article

  • Create a function to a function that runs a for loop?

    - by user637364
    Hi, I have made some code that creates a red border around an image when the user clicks on it, to highlight that it is the chosen one. But I want to erase the previous border (or reset all borders to white) before a new click is made on another image. My question is: how do I activate a call to a function when a click is made, and how would such a function look in jQuery? I just want to use .css to change the border, perhaps in a loop over the ids of the images. Can I mix plain JavaScript with jQuery, or should a script contain only pure jQuery code? This is a simplified part of the code; the full version contains "minibild_1" to "minibild_5".

        $(document).ready(function(){
            $("#minibild_1").click(function(){
                $("#minibild_1").css({"border":"2px solid #D00C33"});
                $("#storbild").attr("src","../bilder/bilder_stora/{$row1-bild_1}.jpg");
            });
            $("#minibild_2").click(function(){
                $("#minibild_2").css({"border":"2px solid #D00C33"});
                $("#storbild").attr("src","../bilder/bilder_stora/{$row1-bild_2}.jpg");
            });
        });
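
    One way to avoid one handler per image is to give the thumbnails a common class and reset every border before highlighting the clicked one. A sketch; the "minibild" class and the data-stor attribute holding the large-image URL are assumptions, not the original markup:

        $(document).ready(function () {
            $(".minibild").click(function () {
                $(".minibild").css("border", "2px solid #FFFFFF");   // reset all borders
                $(this).css("border", "2px solid #D00C33");          // highlight the clicked one
                $("#storbild").attr("src", $(this).attr("data-stor"));
                return false;
            });
        });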

    Read the article

  • Problem accessing private variables in jQuery like chainable design pattern

    - by novogeek
    Hi folks, I'm trying to create my custom toolbox which imitates jQuery's design pattern. Basically, the idea is somewhat derived from this post: http://stackoverflow.com/questions/2061501/jquery-plugin-design-pattern-common-practice-for-dealing-with-private-function (check the answer given by "David"). So here is my toolbox function:

        (function(window){
            var mySpace=function(){
                return new PrivateSpace();
            }

            var PrivateSpace=function(){
                var testCache={};
            };

            PrivateSpace.prototype={
                init:function(){
                    console.log('init this:', this);
                    return this;
                },
                ajax:function(){
                    console.log('make ajax calls here');
                    return this;
                },
                cache:function(key,selector){
                    console.log('cache selectors here');
                    testCache[key]=selector;
                    console.log('cached selector: ',testCache);
                    return this;
                }
            }

            window.hmis=window.m$=mySpace();
        })(window)

    Now, if I execute this function like:

        console.log(m$.cache('firstname','#FirstNameTextbox'));

    I get the error "'testCache' is not defined". I'm not able to access the variable "testCache" inside the cache function of the prototype. How should I access it? Basically, what I want to do is cache all my jQuery selectors into an object and use this object in the future.
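
    The variable is scoped to the constructor body, so the prototype methods can't see it. Two ways around that (a sketch):

        // Option 1: make the cache an instance property, reachable from prototype methods.
        var PrivateSpace = function () {
            this.testCache = {};
        };
        PrivateSpace.prototype.cache = function (key, selector) {
            this.testCache[key] = selector;   // 'this' is the PrivateSpace instance
            return this;
        };

        // Option 2: keep it truly private by declaring it in the outer IIFE scope
        // (next to PrivateSpace, outside the constructor), where the prototype
        // methods defined in the same scope can close over it:
        // var testCache = {};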

    Read the article

  • Unicode version of base64 encoding/ decoding

    - by Yan Cheng CHEOK
    I am using the base64 encoding/decoding from http://www.adp-gmbh.ch/cpp/common/base64.html. It works pretty well with the following code:

        const std::string s = "I Am A Big Fat Cat";
        std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(s.c_str()), s.length());
        std::string decoded = base64_decode(encoded);
        std::cout << _T("encoded: ") << encoded << std::endl;
        std::cout << _T("decoded: ") << decoded << std::endl;

    However, when it comes to Unicode:

        namespace std
        {
        #ifdef _UNICODE
            typedef wstring tstring;
        #else
            typedef string tstring;
        #endif
        }

        const std::tstring s = _T("I Am A Big Fat Cat");

    How can I still make use of the above function? Merely changing the signatures to

        std::string base64_encode(unsigned TCHAR const* , unsigned int len);
        std::tstring base64_decode(std::string const& s);

    will not work correctly. (I expect base64_encode to return ASCII; hence std::string should be used instead of std::tstring.)
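
    One approach (a sketch, Windows-specific, and assuming the base64_encode/base64_decode functions from that page are available): treat base64 as operating on bytes, and convert the wide string to UTF-8 bytes first, so the base64 text itself stays a plain ASCII std::string.

        #include <windows.h>
        #include <string>

        std::string encode_wide(const std::wstring& ws)
        {
            int len = WideCharToMultiByte(CP_UTF8, 0, ws.c_str(), (int)ws.size(),
                                          NULL, 0, NULL, NULL);
            std::string utf8(len, '\0');
            WideCharToMultiByte(CP_UTF8, 0, ws.c_str(), (int)ws.size(),
                                &utf8[0], len, NULL, NULL);
            return base64_encode(reinterpret_cast<const unsigned char*>(utf8.data()),
                                 static_cast<unsigned int>(utf8.size()));
        }

        std::wstring decode_wide(const std::string& b64)
        {
            std::string utf8 = base64_decode(b64);
            int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), NULL, 0);
            std::wstring ws(len, L'\0');
            MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &ws[0], len);
            return ws;
        }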

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

      - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
      - the 'generic' team developed the parts that were common/generic to all applications (user-interface related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy:

      - the 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development and then donate it to the 'generic' team
      - the 'generic' team's focus seems to have become more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed there, but they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that this split in teams is no longer a good idea. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...).

    How do you split up the work between different teams if you have different applications?

      - everybody can write generic code and donates it to a central 'generic' team?
      - everybody can write generic code, but nobody 'manages' it (everybody is the owner)?
      - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
      - there is no overlap in code between the different applications?
      - some other way?

    Notice that the advantage of having the mix (allowing everybody to write anywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • C#: Determine Type for (De-)Serialization

    - by dbemerlin
    Hi, I have a little problem implementing some serialization/deserialization logic. I have several classes that each take a different type of request object, all implementing a common interface and inheriting from a default implementation. This is how I think it should be:

    Requests:

        interface IRequest
        {
            public String Action {get;set;}
        }

        class DefaultRequest : IRequest
        {
            public String Action {get;set;}
        }

        class LoginRequest : DefaultRequest
        {
            public String User {get;set;}
            public String Pass {get;set;}
        }

    Handlers:

        interface IHandler<T>
        {
            public Type GetRequestType();
            public IResponse HandleRequest(IModel model, T request);
        }

        class DefaultHandler<T> : IHandler<T> // Used as fallback if the handler cannot be determined
        {
            public Type GetRequestType()
            {
                return /* ....... how to get the Type of T? ((new T()).GetType()) ? .......... */
            }

            public IResponse HandleRequest(IModel model, T request) { /* ... */ }
        }

        class LoginHandler : DefaultHandler<LoginRequest>
        {
            public IResponse HandleRequest(IModel mode, LoginRequest request) { }
        }

    Calling:

        class Controller
        {
            public ProcessRequest(String action, String serializedRequest)
            {
                IHandler handler = GetHandlerForAction(action);
                IRequest request = serializer.Deserialize<handler.GetRequestType()>(serializedRequest);
                handler(this.Model, request);
            }
        }

    Is what I think of even possible? My current solution is that each handler gets the serialized string and is itself responsible for deserialization. This is not a good solution, as it contains duplicate code: the beginning of each HandleRequest method looks the same (FooRequest request = Deserialize(serializedRequest); plus try/catch and other error handling on failed deserialization). Embedding type information into the serialized data is not possible and not intended. Thanks for any hints.
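
    On the specific question of getting T's type: typeof(T) works directly inside the generic class, and the controller can build the generic Deserialize call via reflection when the type is only known at runtime. A sketch, assuming the serializer exposes some Deserialize<T>(string) method (that name is taken from the question, not a known API):

        // Inside DefaultHandler<T>:
        public Type GetRequestType()
        {
            return typeof(T);   // no instance of T needed
        }

        // Inside the controller (requires using System.Reflection;):
        Type requestType = handler.GetRequestType();
        MethodInfo open   = serializer.GetType().GetMethod("Deserialize");
        MethodInfo closed = open.MakeGenericMethod(requestType);
        IRequest request  = (IRequest)closed.Invoke(serializer, new object[] { serializedRequest });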

    Read the article

  • log activity. intrusion detection. user event notification (interaction). messaging

    - by Julian Davchev
    I have three questions that I find somehow related, so I put them in the same place. I am currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement the following. The system is user-aware, meaning all actions can be bound to a particular logged-in user.

    1. How to log all actions/activities of users, so that stats/graphics can be extracted later for analysis? At best that will include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, with a cron job later dumping them into the DB and another cron job analysing them, might be a good idea here. Since I'm using Zend Framework, I guess I may use a request plugin so I don't have to make the log() call all over the code.

    2. How to log stuff so it may be used for intrusion detection? I know most things might be done at the HTTP level using Apache mods, for example, but there are also specific cases (5 failed login attempts in a row leads to a captcha, etc.). This would also involve tons of inserts. Here I guess direct usage of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. I'm not sure whether I can reuse the data from point 1.

    3. The system will notify users of some events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user, others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring events, the user will be notified (in a user inbox, for example) of what happened, with a link or something leading him to the details of the thing that happened so he can take action accordingly.

    These seem trivial at first look, but the problem I see is that coding them directly quickly becomes hard to maintain.

    Read the article

  • What is an appropriate way to separate lifecycle events in the logging system?

    - by Hanno Fietz
    I have an application with many different parts. It runs on OSGi, so there are the bundle lifecycles; there are a number of message processors and plugin components that can all die, be started and stopped, have their setup changed, etc. I want a way to get a good picture of the current system status: which components are up, which have problems, how long they have been running for, and so on.

    I think that logging, especially in combination with custom appenders (I'm using log4j), is a good part of the solution and helps ad-hoc analysis as well as live monitoring. Normally I would classify lifecycle events as INFO level, but what I really want is to have them separate from everything else going on at INFO. I could create my own level, LIFECYCLE. The lifecycle events happen in various different areas and on various levels in the application hierarchy; they also happen in the same areas as other events that I want to separate them from. I could introduce some common lifecycle management and use that to distinguish the events from others. For instance, all components that have a lifecycle could implement a particular interface, and I log by its name.

    Are there good examples of how this is done elsewhere? What are the considerations?
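
    One low-tech option in log4j is to keep the events at INFO but give them their own logger hierarchy, which a dedicated appender then captures. A sketch with made-up names:

        // In each lifecycle-aware component (import org.apache.log4j.Logger):
        // log to a "lifecycle." logger named after the component, not after the class.
        Logger lifecycleLog = Logger.getLogger("lifecycle." + componentName);
        lifecycleLog.info("started");

        # log4j.properties: route everything under "lifecycle" to its own appender
        log4j.logger.lifecycle=INFO, LIFECYCLE
        log4j.additivity.lifecycle=false
        log4j.appender.LIFECYCLE=org.apache.log4j.FileAppender
        log4j.appender.LIFECYCLE.File=lifecycle.log
        log4j.appender.LIFECYCLE.layout=org.apache.log4j.PatternLayout
        log4j.appender.LIFECYCLE.layout.ConversionPattern=%d %c - %m%n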

    Read the article

  • C++ reference variable again!!!

    - by kumar_m_kiran
    Hi all, I think most would be surprised by the topic again. However, I am referring to the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. In the book (in Item 5, "References Are Aliases, Not Pointers") he quotes a particular sentence, which is as below:

        A reference is an alias for an object that already exists prior to the initialization of the
        reference. Once a reference is initialized to refer to a particular object, it cannot later
        be made to refer to a different object; a reference is bound to its initializer for its whole
        lifetime.

    Can anyone please explain the context of "cannot later be made to refer to a different object"? The code below works for me:

        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[])
        {
            int i = 100;
            int& ref = i;
            cout<<ref<<endl;

            int k = 2000;
            ref = k;
            cout<<ref<<endl;

            return 0;
        }

    Here I am referring the variable ref to both the i and k variables, and the code works perfectly fine. Am I missing something? I have used SUSE10 64-bit Linux for testing my sample program. Thanks for your input in advance.
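
    A small variation (my own illustration) makes the distinction visible: the assignment goes through the reference into i; it does not rebind the reference.

        #include <iostream>

        int main()
        {
            int i = 100;
            int& ref = i;

            int k = 2000;
            ref = k;                            // copies k's VALUE into i; ref is not rebound

            std::cout << (&ref == &i) << '\n';  // 1: ref still aliases i
            std::cout << i << '\n';             // 2000: i was overwritten by the assignment
            k = 5;
            std::cout << ref << '\n';           // still 2000: changing k no longer affects ref
            return 0;
        }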

    Read the article

  • A better solution than element.Elements("Whatever").First()?

    - by codeka
    I have an XML file like this:

        <SiteConfig>
          <Sites>
            <Site Identifier="a" />
            <Site Identifier="b" />
            <Site Identifier="c" />
          </Sites>
        </SiteConfig>

    The file is user-editable, so I want to provide a reasonable error message in case I can't properly parse it. I could probably write an .xsd for it, but that seems like overkill for a simple file. So anyway, when querying for the list of <Site> nodes, there are a couple of ways I could do it:

        var doc = XDocument.Load(...);
        var siteNodes = from siteNode in doc.Element("SiteConfig").Element("SiteUrls").Elements("Sites")
                        select siteNode;

    But the problem with this is that if the user has not included the <SiteUrls> node (say), it'll just throw a NullReferenceException, which doesn't really tell the user much about what actually went wrong. Another possibility is to use Elements() everywhere instead of Element(), but that doesn't always work out when coupled with calls to Attribute(), for example in the following situation:

        var siteNodes = from siteNode in doc.Elements("SiteConfig")
                                             .Elements("SiteUrls")
                                             .Elements("Sites")
                        where siteNode.Attribute("Identifier").Value == "a"
                        select siteNode;

    (That is, there's no equivalent to Attributes("xxx").Value.) Is there something built into the framework to handle this situation a little better? What I would prefer is a version of Element() (and of Attribute() while we're at it) that throws a descriptive exception (e.g. "Looking for element <xyz> under <abc> but no such element was found") instead of returning null. I could write my own version of Element() and Attribute(), but it just seems to me like this is such a common scenario that I must be missing something...
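
    If nothing built in turns up, a pair of small extension methods gives exactly the descriptive failure described above. A sketch; the method names are made up:

        using System;
        using System.Xml.Linq;

        static class XExtensions
        {
            public static XElement ElementOrThrow(this XElement parent, XName name)
            {
                XElement child = parent.Element(name);
                if (child == null)
                    throw new InvalidOperationException(string.Format(
                        "Looking for element <{0}> under <{1}> but no such element was found.",
                        name, parent.Name));
                return child;
            }

            public static string AttributeValueOrThrow(this XElement element, XName name)
            {
                XAttribute attribute = element.Attribute(name);
                if (attribute == null)
                    throw new InvalidOperationException(string.Format(
                        "Element <{0}> has no attribute '{1}'.", element.Name, name));
                return attribute.Value;
            }
        }

        // Usage: doc.Root.ElementOrThrow("Sites").Elements("Site")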

    Read the article

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit, LastHit:

      - Id - the record id
      - Count - number of times this Id has been reported
      - FirstHit - earliest timestamp with which this Id was reported
      - LastHit - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records, with these fields among many others:

      - Id
      - Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

      - Count - log's Count value, plus the count() of records for that Id found in feed
      - FirstHit - the earlier of the current value in log or the minimum value in feed for that Id
      - LastHit - the later of the current value in log or the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked was to create a temporary table and insert into it the union of both, as in:

        Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count
        FROM feed
        GROUP BY Id
        UNION ALL
        Select Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count):

        Select Id, Min(FirstHit), Max(LastHit), Sum(Count)
        FROM @temp
        GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
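
    If this is SQL Server 2008 or later (the @temp syntax suggests T-SQL), a single MERGE can do the in-place update and the insert of new Ids in one pass. A sketch using the column names from the question:

        MERGE log AS t
        USING (SELECT Id,
                      MIN([Timestamp]) AS FirstHit,
                      MAX([Timestamp]) AS LastHit,
                      COUNT(*)         AS Cnt
               FROM feed
               GROUP BY Id) AS s
        ON t.Id = s.Id
        WHEN MATCHED THEN UPDATE SET
             t.[Count]  = t.[Count] + s.Cnt,
             t.FirstHit = CASE WHEN s.FirstHit < t.FirstHit THEN s.FirstHit ELSE t.FirstHit END,
             t.LastHit  = CASE WHEN s.LastHit  > t.LastHit  THEN s.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
             INSERT (Id, [Count], FirstHit, LastHit)
             VALUES (s.Id, s.Cnt, s.FirstHit, s.LastHit);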

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to unit testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project for which I want to build tests from the start. I'm trying to figure out general strategies and patterns for building test suites.

    When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, related to a database table, we will want to test - well, the CRUD:

      - creating an object and seeing whether it exists
      - querying its properties
      - changing some properties
      - changing some properties to incorrect values
      - and deleting it again

    As for how to break things, there are "fail" tests common to most CRUD classes, like:

      - invalid input data types
      - a number as the ID key that exceeds the range of the chosen data type
      - input in an incorrect character encoding
      - input that is too long
      - and so on and so on

    For a unit test concerned with file operations, the list of "breaking things" could be:

      - invalid characters in the file name
      - a file name that is too long
      - a file name that uses an incorrect protocol or path

    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units being tested. Now my questions are:

      - Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, and if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go?
      - If I am correct: are there existing definitions, lists, cheat sheets for such patterns?
      - Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns? See the sketch after this list.
      - Is there any assistance - in the form of checklists or software - to aid in writing complete tests?
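
    In PHPUnit, one piece of this can be automated with data providers: a single "breaking inputs" provider can be reused across tests. A rough sketch in the PHPUnit style of that era; the UserAccount class and its load() method are made up:

        <?php
        class UserAccountTest extends PHPUnit_Framework_TestCase
        {
            public function brokenIdProvider()
            {
                return array(
                    'non-numeric id'  => array('abc'),
                    'negative id'     => array(-1),
                    'huge id'         => array(PHP_INT_MAX),
                    'overlong string' => array(str_repeat('x', 10000)),
                );
            }

            /**
             * @dataProvider brokenIdProvider
             * @expectedException InvalidArgumentException
             */
            public function testLoadRejectsBrokenIds($id)
            {
                $account = new UserAccount();
                $account->load($id);
            }
        }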

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    for example see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path.

    I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but not realistic to expect your users to use it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with.

    So, if you had just downloaded the source of my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X."
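
    One candidate for "X", sketched here without claiming it is the only answer: unittest's built-in test discovery, run from the project root.

        # From the project root (requires Python 2.7+ or 3.x).  Depending on the
        # Python version, the test/ directory (and the package directory) may need
        # an __init__.py so the test modules and the package are importable.
        cd new_project
        python -m unittest discover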

    Read the article

  • Programming logic best practice - redundant checks

    - by eldblz
    I'm creating a large PHP project and I have a trivial doubt about how to proceed. Assume we have a class Books; in this class I have the method ReturnInfo:

        function ReturnInfo($id) {
            if( is_numeric($id) ) {
                $query = "SELECT * FROM books WHERE id='" . $id . "' LIMIT 1;";
                if( $row = $this->DBDrive->ExecuteQuery($query, $FetchResults=TRUE) ) {
                    return $row;
                } else {
                    return FALSE;
                }
            } else {
                throw new Exception('Books - ReturnInfo - id not valid.');
            }
        }

    Then I have another method, PrintInfo:

        function PrintInfo($id) {
            print_r( $this->ReturnInfo($id) );
        }

    Obviously the code samples are just an example, not actual production code. In the second method, should I check (again) whether $id is numeric? Or can I skip it because it is already taken care of in the first method, and if it's not valid an exception will be thrown? Until now I have always written code with redundant checks (no matter whether it was already checked elsewhere, I check it again). Is there a best practice, or is it just common sense? Thank you in advance for your kind replies.
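
    One common convention is to validate at each public entry point but keep the check itself in a single private helper, so internal callers don't repeat it. A sketch only, not a statement of the one right answer:

        class Books
        {
            private function assertValidId($id) {
                if (!is_numeric($id)) {
                    throw new InvalidArgumentException('Books - id not valid: ' . $id);
                }
            }

            public function ReturnInfo($id) {
                $this->assertValidId($id);   // the single place the check lives
                // ... run the query and return $row or FALSE ...
            }

            public function PrintInfo($id) {
                // No re-check needed: ReturnInfo() validates and throws for bad input.
                print_r($this->ReturnInfo($id));
            }
        }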

    Read the article

  • How can I prevent infinite recursion when using events to bind UI elements to fields?

    - by Billy ONeal
    The following seems to be a relatively common pattern (to me, not to the community at large) to bind a string variable to the contents of a TextBox:

        class MyBackEndClass
        {
            public event EventHandler DataChanged;

            string _Data;
            public string Data
            {
                get { return _Data; }
                set
                {
                    _Data = value;
                    // Fire the DataChanged event
                }
            }
        }

        class SomeForm : // Form stuff
        {
            MyBackEndClass mbe;
            TextBox someTextBox;

            SomeForm()
            {
                someTextBox.TextChanged += HandleTextBox();
                mbe.DataChanged += HandleData();
            }

            void HandleTextBox(Object sender, EventArgs e)
            {
                mbe.Data = ((TextBox)sender).Text;
            }

            void HandleData(Object sender, EventArgs e)
            {
                someTextBox.Text = ((MyBackEndClass) sender).Data;
            }
        }

    The problem is that changing the TextBox changes the data value in the backend, which causes the textbox to change, and so on. That runs forever. Is there a better design pattern (other than resorting to a nasty boolean flag) that handles this case correctly?

    EDIT: To be clear, in the real design the backend class is used to synchronize changes between multiple forms. Therefore I can't just use the SomeTextBox.Text property directly. Billy3
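
    One standard way to break the cycle without a flag is to make both sides no-ops when the incoming value equals the current one, so the ping-pong stops after a single round trip. A sketch along the lines of the code above:

        class MyBackEndClass
        {
            public event EventHandler DataChanged;

            string _Data;
            public string Data
            {
                get { return _Data; }
                set
                {
                    if (_Data == value) return;          // no-op updates don't re-fire the event
                    _Data = value;
                    var handler = DataChanged;
                    if (handler != null) handler(this, EventArgs.Empty);
                }
            }
        }

        // In the form, the same guard on the UI side:
        void HandleData(Object sender, EventArgs e)
        {
            var data = ((MyBackEndClass)sender).Data;
            if (someTextBox.Text != data)
                someTextBox.Text = data;                 // only touch the TextBox when it actually differs
        }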

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions.

    For this request the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request.

    This last step is the problem: the command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
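
    One common approach is optimistic concurrency: read a version marker along with the precondition data, and make the final UPDATE conditional on that marker still matching, retrying (or reporting a conflict) if it doesn't. A T-SQL sketch with made-up table and column names:

        -- Read step: fetch the precondition data plus a version column.
        SELECT OrderId, Status, RowVer FROM Orders WHERE OrderId = @id;

        -- Update step: only succeeds if nothing changed since the read.
        UPDATE Orders
        SET    Status = @newStatus
        WHERE  OrderId = @id
          AND  RowVer  = @rowVerFromRead;

        IF @@ROWCOUNT = 0
        BEGIN
            -- Someone modified the row after the precondition check:
            -- re-read, re-evaluate the preconditions, and retry or report a conflict.
            RAISERROR('Concurrent modification detected; please retry.', 16, 1);
        END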

    Read the article

  • JavaScript for loop index strangeness

    - by pythonBOI
    I'm relatively new to JS, so this may be a common problem, but I noticed something strange when dealing with for loops and the onclick function. I was able to replicate the problem with this code:

        <html>
        <head>
        <script type="text/javascript">
        window.onload = function () {
            var buttons = document.getElementsByTagName('a');
            for (var i=0; i<2; i++) {
                buttons[i].onclick = function () {
                    alert(i);
                    return false;
                }
            }
        }
        </script>
        </head>
        <body>
            <a href="">hi</a> <br />
            <a href="">bye</a>
        </body>
        </html>

    When clicking the links I would expect to get '0' and '1', but instead I get '2' for both of them. Why is this? BTW, I managed to solve my particular problem by using the 'this' keyword, but I'm still curious as to what is behind this behavior.
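
    The handlers all close over the same variable i, which is already 2 by the time any click happens. Capturing a per-iteration copy shows the expected values; a sketch:

        window.onload = function () {
            var buttons = document.getElementsByTagName('a');
            for (var i = 0; i < 2; i++) {
                // An immediately-invoked function gives each handler its own copy of i.
                (function (index) {
                    buttons[index].onclick = function () {
                        alert(index);
                        return false;
                    };
                })(i);
            }
        };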

    Read the article

  • [c++] accessing the hidden 'this' pointer

    - by Kyle
    I have a GUI architecture wherein elements fire events like so:

        guiManager->fireEvent(BUTTON_CLICKED, this);

    Every single event fired passes 'this' as the caller of the event. There is never a time I don't want to pass 'this', and further, no pointer except for 'this' should ever be passed. This brings me to a problem: how can I assert that fireEvent is never given a pointer other than 'this', and how can I simplify (and homogenize) calls to fireEvent to just:

        guiManager->fireEvent(BUTTON_CLICKED);

    At this point, I'm reminded of a fairly common compiler error when you write something like this:

        class A {
        public:
            void foo() {}
        };

        class B {
            void oops() {
                const A* a = new A;
                a->foo();
            }
        };

        int main() {
            return 0;
        }

    Compiling this will give you:

        ../src/sandbox.cpp: In member function 'void B::oops()':
        ../src/sandbox.cpp:7: error: passing 'const A' as 'this' argument of 'void A::foo()' discards qualifiers

    because member functions pass 'this' as a hidden parameter. "Aha!" I say. This (no pun intended) is exactly what I want. If I could somehow access the hidden 'this' pointer, it would solve both issues I mentioned earlier. The problem is, as far as I know, you can't (can you?), and if you could, there would be outcries of "but it would break encapsulation!" Except I'm already passing 'this' every time, so what more could it break.

    So, is there a way to access the hidden 'this', and if not, are there any idioms or alternative approaches that are more elegant than passing 'this' every time?
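
    One idiom that gets the same effect: if the GUI elements share a base class, a protected wrapper there can supply 'this', so derived classes only ever write fireEvent(BUTTON_CLICKED). A sketch; the guiManager and event names follow the question, while Widget/Button are made up:

        class Widget {
        protected:
            explicit Widget(GuiManager* manager) : guiManager(manager) {}

            void fireEvent(EventType type) {
                guiManager->fireEvent(type, this);   // the only place 'this' is ever passed
            }

            GuiManager* guiManager;
        };

        class Button : public Widget {
        public:
            explicit Button(GuiManager* manager) : Widget(manager) {}
            void onClick() { fireEvent(BUTTON_CLICKED); }
        };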

    Read the article

  • Help regarding multi-threading in MFC, please help me friends!

    - by kiddo
    Hello all. In my application there is a small function that reads files to get some information. The number of files will be at least 50, so I thought of implementing threading. Say the user gives 50 files: I wanted to split that as 5 * 10, i.e. create 5 threads so that each thread handles 10 files, which should speed up the process. Also, as you can see from the code below, some variables are shared. I have read some articles about threading and I am aware that only one thread should access a variable/control at a time (CCriticalSection can be used for that). As a beginner, I am finding it hard to implement what I have learned about threading. Could somebody please give me some ideas for the code shown below? Thanks in advance.

    File read function:

        void CMyClass::GetWorkFilesInfo(CStringArray& dataFilesArray,
                                        CString* dataFilesB,
                                        int* check,
                                        DWORD noOfFiles,
                                        LPWSTR path)
        {
            CString cFilePath;
            int cIndex = 0;
            int exceptionInd = 0;
            wchar_t** filesForWork = new wchar_t*[noOfFiles];
            int tempCheck;
            int localIndex = 0;

            for(int index = 0; index < noOfFiles; index++)
            {
                tempCheck = *(check + index);
                if(tempCheck == NOCHECKBOX)
                {
                    *(filesForWork+cIndex) = new TCHAR[MAX_PATH];
                    wcscpy(*(filesForWork+cIndex), *(dataFilesB + index));
                    cIndex++;
                }
                else // CHECKED or UNCHECKED
                {
                    dataFilesArray.Add(*(dataFilesB + index));
                    *(check + localIndex) = *(check + index);
                    localIndex++;
                }
            }

            WorkFiles(&cFilePath, dataFilesArray, filesForWork, path, cIndex);
            dataFilesArray.Add(cFilePath);
            *(check + localIndex) = CHECKED;
        }
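
    A rough MFC worker-thread sketch, not a drop-in rewrite of the function above: each thread gets its own slice of the file list, and a CCriticalSection guards everything shared. ReadOneFile is a hypothetical helper standing in for the per-file work.

        // Parameters handed to each worker: its own slice of the file list plus
        // pointers to the shared results and the lock that guards them.
        struct WorkerParams
        {
            const CStringArray* files;     // this thread's slice (read-only)
            CStringArray*       results;   // shared output
            CCriticalSection*   lock;      // guards 'results'
        };

        UINT FileWorker(LPVOID pParam)
        {
            WorkerParams* p = reinterpret_cast<WorkerParams*>(pParam);
            for (int i = 0; i < p->files->GetSize(); ++i)
            {
                CString info = ReadOneFile(p->files->GetAt(i));   // hypothetical per-file work
                CSingleLock guard(p->lock, TRUE);                 // lock while touching shared data
                p->results->Add(info);
            }                                                     // guard unlocks here
            return 0;
        }

        // Launching, e.g. 5 workers, each with its own WorkerParams:
        // CWinThread* t = AfxBeginThread(FileWorker, &params[i]);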

    Read the article

  • Linq to SQL Intersect help needed

    - by mohang
    Hi, I have tried various suggestions given on SO and I still did not get the answer I needed. Kindly help me; I appreciate it. I have two sets and I need help getting the LINQ to SQL intersection done:

        IQueryable<BusinessEntity> firstSet = from ent in all entities
                                              where ... // Code to get the first set.

        IQueryable<BusinessEntity> secondSet = from ent in all entities
                                               where ... // Code to get the second set.

    Now I want the intersection, that is, the common elements of these sets. I have tried various ways, including the following, and I did not get the result I wanted. Please help me to get the right result.

        var commonEntities = (from ent1 in firstSet
                              from ent2 in secondSet
                              where ent1.BusinessEntityId == ent2.BusinessEntityId
                              select ent1);
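
    Two query shapes that LINQ to SQL translates well, sketched on the assumption that BusinessEntityId is the key (add Distinct() if the join can yield duplicates):

        // Join on the key (translates to an INNER JOIN):
        var commonEntities = from ent1 in firstSet
                             join ent2 in secondSet
                                 on ent1.BusinessEntityId equals ent2.BusinessEntityId
                             select ent1;

        // Or filter by the other set's ids (translates to WHERE ... IN (subquery)):
        var secondIds = secondSet.Select(e => e.BusinessEntityId);
        var commonEntities2 = firstSet.Where(e => secondIds.Contains(e.BusinessEntityId));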

    Read the article
