Search Results

Search found 303 results on 13 pages for 'conceptual'.

Page 11 of 13

  • Rationale of C# iterators design (comparing to C++)

    - by macias
    I found a similar topic: http://stackoverflow.com/questions/56347/iterators-in-c-stl-vs-java-is-there-a-conceptual-difference It basically deals with the Java iterator (which is similar to C#'s) being unable to go backward. So here I would like to focus on limits -- a C++ iterator does not know its limit; you have to compare the given iterator against the limit yourself. A C# iterator knows more -- you can tell whether it is valid without comparing it to any external reference. I prefer the C++ way, because once you have an iterator you can use any other iterator as a limit. In other words, if you would like to get only a few elements instead of the entire collection, you don't have to alter the iterator (in C++). For me it is more "pure" (clear). But of course Microsoft knew both this and C++ while designing C#. So what are the advantages of the C# way? Which approach is more powerful (i.e. leads to more elegant functions based on iterators)? What am I missing? If you have thoughts on C# vs. C++ iterator design other than their limits (boundaries), please also answer. Note: (just in case) please keep the discussion strictly technical. No C++/C# flamewar.
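
    For illustration, a minimal C# sketch (values and names invented, not from the original post) contrasting the two idioms: the C# enumerator that knows its own validity, and the C++-style "any iterator can be the limit" expressed as a sub-range via LINQ:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class IteratorLimits
        {
            static void Main()
            {
                var numbers = new List<int> { 1, 2, 3, 4, 5 };

                // C# idiom: the enumerator itself reports when it is exhausted.
                using (IEnumerator<int> it = numbers.GetEnumerator())
                {
                    while (it.MoveNext())            // no external limit needed
                        Console.Write(it.Current + " ");
                }
                Console.WriteLine();

                // The C++-style "pick your own limit", expressed as a sub-range:
                foreach (int n in numbers.Take(3))   // stop after three elements
                    Console.Write(n + " ");
                Console.WriteLine();
            }
        }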

    Read the article

  • Passing Extras and screen rotation

    - by Luis A. Florit
    This kind of question appears periodically. Sorry if this has been covered before, but I'm a newbie and couldn't find the appropriate answer. It deals with the correct implementation of communication between classes and activities. I made a gallery app. It has 3 main activities: the Main one, to search for filenames using a pattern; a Thumb one, that shows all the images that matched the pattern as thumbnails in a gridview; and a Photo activity, that opens a full-sized image when you click a thumb in Thumbs. I pass the filenames (an array) and the position (an int) of the clicked thumb in the gridview to the Photo activity via an Intent. This third Photo activity has only one view on it: a TouchImageView, which I adapted for previous/next switching and zooming according to where you short-click on the image (left, right or middle). Moreover, I added a long-click listener to Photo to show EXIF info. The thing is working, but I am not happy with the implementation... Some things are not right. One of the problems I am experiencing is that, if I click on the right of the image to see the next one in the Photo activity, it switches fine (position++), but when I rotate the device the image at the original position reappears. What is happening is that Photo is destroyed when the device rotates, and for some reason it restarts without restoring the state I expected from super.onCreate(savedInstanceState), loading the Extras again (the position only changed in Photo, not in the parent activities). I tried startActivityForResult instead of startActivity, but failed... Of course I can do something contrived to save the position data, but there must be something "conceptual" that I am not understanding about how activities work, and I want to do this right. Can someone please explain to me what I am doing wrong, which is the best method to implement what I want, and why? Thanks a lot!!!

    Read the article

  • What do you call a set of Javascript closures that share a common context?

    - by Ed Stauff
    I've been trying to learn closures (in JavaScript), which kind of hurts my brain after way too many years with C# and C++. I think I now have a basic understanding, but one thing bothers me: I've visited lots of websites in this Quest for Knowledge, and nowhere have I seen a word (or even a simple two-word phrase) that means "a set of JavaScript closures that share a common execution context". For example:

        function CreateThingy (name, initialValue) {
            var myName = name;
            var myValue = initialValue;
            var retObj = new Object;
            retObj.getName = function() { return myName; }
            retObj.getValue = function() { return myValue; }
            retObj.setValue = function(newValue) { myValue = newValue; }
            return retObj;
        };

    From what I've read, this seems to be one common way of implementing data hiding. The value returned by CreateThingy is, of course, an object, but what would you call the set of functions which are properties of that object? Each one is a closure, but I'd like a name I can use to describe (and think about) all of them together as one conceptual entity, and I'd rather use a commonly accepted name than make one up. Thanks! -- Ed
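
    Since the poster is coming from C#, the same shape can be sketched there as well: three delegates capturing the same locals, i.e. closures sharing one environment. A hedged sketch, with names and values invented for illustration:

        using System;

        class ClosureSet
        {
            // Returns three delegates that all capture the same local state,
            // a C# analogue of the JavaScript CreateThingy above.
            static (Func<string> GetName, Func<int> GetValue, Action<int> SetValue)
                CreateThingy(string name, int initialValue)
            {
                string myName = name;
                int myValue = initialValue;
                return (() => myName,
                        () => myValue,
                        newValue => myValue = newValue);
            }

            static void Main()
            {
                var thingy = CreateThingy("widget", 42);
                thingy.SetValue(7);
                // Both delegates see the same captured variable:
                Console.WriteLine($"{thingy.GetName()} = {thingy.GetValue()}"); // widget = 7
            }
        }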

    Read the article

  • Communicating with a running python daemon

    - by hanksims
    I wrote a small Python application that runs as a daemon. It utilizes threading and queues. I'm looking for general approaches to altering this application so that I can communicate with it while it's running. Mostly I'd like to be able to monitor its health. In a nutshell, I'd like to be able to do something like this: python application.py start # launches the daemon Later, I'd like to be able to come along and do something like: python application.py check_queue_size # return info from the daemonized process To be clear, I don't have any problem implementing the Django-inspired syntax. What I don't have any idea how to do is send signals to the daemonized process (start), or write the daemon to handle and respond to such signals. Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it. UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up. Thanks again.

    Read the article

  • Instantiate a form, then find it later, without showing it initially

    - by awilson53
    I am having a problem that is strange to me but hopefully is not so strange to someone else. : ) Some background: I am working on a simple IM client that allows the user to broadcast messages to multiple recipients. The goal is to create a chat form for each of the recipients containing the text of the broadcast message, then show that form only if the recipient responds to the broadcaster. However, when the application receives a response and then attempts to locate the form for that particular chat session (using Application.OpenForms), it cannot find it UNLESS I call .Show() at the time it is created. I would like to avoid having to show this form when it is created, because this means that the user will see a flash on the screen. The form doesn't seem to really be created until I show it, but it would seem there has to be a way to do this without showing it first. Can anyone assist? I can provide code snippets if needed; I didn't in this post because this feels more like a conceptual misunderstanding on my part than a bug in the code. Thanks in advance!
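
    One hedged possibility (a sketch, not the poster's code; ChatForm details and the session key are invented): keep your own lookup of created forms instead of relying on Application.OpenForms, which, as the poster observed, only finds forms once they have been shown:

        using System;
        using System.Collections.Generic;
        using System.Windows.Forms;

        // A minimal registry of chat forms, created up front but shown on demand.
        class ChatFormRegistry
        {
            private readonly Dictionary<string, Form> _forms =
                new Dictionary<string, Form>();

            public void CreateFor(string recipient, string broadcastText)
            {
                var form = new Form { Text = "Chat with " + recipient };
                // ... add controls, append broadcastText ...
                _forms[recipient] = form;      // findable without ever showing it
            }

            public void ShowIfReplied(string recipient)
            {
                Form form;
                if (_forms.TryGetValue(recipient, out form) && !form.Visible)
                    form.Show();               // the first flash happens only on reply
            }
        }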

    Read the article

  • Permuting output of a tree of closures

    - by yan
    This is a conceptual question on how one would implement the following in Lisp (assuming Common Lisp in my case, but any dialect would work). Assume you have a function that creates closures that sequentially iterate over an arbitrary collection (or otherwise return different values) of data and return nil when exhausted, i.e.:

        (defun make-counter (up-to)
          (let ((cnt 0))
            (lambda ()
              (if (< cnt up-to)
                  (incf cnt)
                  nil))))

        CL-USER> (defvar gen (make-counter 3))
        GEN
        CL-USER> (funcall gen)
        1
        CL-USER> (funcall gen)
        2
        CL-USER> (funcall gen)
        3
        CL-USER> (funcall gen)
        NIL
        CL-USER> (funcall gen)
        NIL

    Now, assume you are trying to permute the combinations of one or more of these closures. How would you implement a function that returns a new closure that subsequently creates a permutation of all closures contained within it? I.e.:

        (defun permute-closures (counters)
          ......)

    such that the following holds true:

        CL-USER> (defvar collection (permute-closures (list (make-counter 3) (make-counter 3))))
        CL-USER> (funcall collection)
        (1 1)
        CL-USER> (funcall collection)
        (1 2)
        CL-USER> (funcall collection)
        (1 3)
        CL-USER> (funcall collection)
        (2 1)

    ... and so on. The way I had designed it originally was to add a 'pause' parameter to the initial counting lambda, so that when iterating you can still call it and receive the old cached value if passed :pause t, in hopes of making the permutation slightly cleaner. Also, while the example above is a simple list of two identical closures, the list can be an arbitrarily complicated tree (which can be permuted in depth-first order, and the resulting permutation set would have the shape of the tree). I had this implemented, but my solution wasn't very clean, and I am trying to poll how others would approach the problem. Thanks in advance.
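
    Not Lisp, but the shape of one possible answer is easy to see with C# iterators (a hedged sketch; strictly speaking the desired output is a Cartesian product of the generators' values). The trick is passing restartable factories rather than the exhausted closures themselves, which sidesteps the poster's ':pause' workaround:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class GeneratorProduct
        {
            // Each factory produces a fresh, restartable sequence on demand.
            static IEnumerable<int[]> Product(Func<IEnumerable<int>>[] factories)
            {
                if (factories.Length == 0)
                {
                    yield return new int[0];
                    yield break;
                }
                var rest = factories.Skip(1).ToArray();
                foreach (int head in factories[0]())
                    foreach (int[] tail in Product(rest))
                        yield return new[] { head }.Concat(tail).ToArray();
            }

            static void Main()
            {
                Func<IEnumerable<int>> counter = () => Enumerable.Range(1, 3);
                foreach (var combo in Product(new[] { counter, counter }))
                    Console.WriteLine("(" + string.Join(" ", combo) + ")");
                // prints (1 1) (1 2) (1 3) (2 1) ... as in the question
            }
        }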

    Read the article

  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components, a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a full copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time-bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather that the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary.) The server side will store each user, each review, and each 'item' in separate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than linking against the user for gathering things like 'name', I'd store that on the review. In other words:

        --- Server Side ---
        Item:
          +id            // Store stuff about the item
        User:
          +id
          +Name
          -Password
        Review:
          +id
          +itemId
          +rating
          +text
          +userId

        --- Device Side ---
        Item:
          +id
          +AverageRating
        Review:
          +id
          +rating
          +text
          +userId
          +name
        User:
          +id
          +Name          // Stuff

    The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of 'items' relevant to their query, with certain review-oriented data moved up (i.e. average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure the user would have any use for the additional user information). My basic concern is that I don't want to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalization suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think / worry over, but if it matters I'm creating a PHP / MySQL server that provides data for an iOS / CoreData client.

    Read the article

  • Dependency Injection: How to pass DB around?

    - by Stephane
    Edit: This is a conceptual question first and foremost. I can make applications work without knowing this, but I'm trying to learn the concept. I've seen lots of videos with related classes and that makes sense, but when it comes to classes wrapping around other classes, I can't seem to grasp where things should be instantiated and passed around.

    =-=-=-=-=-=-=

    Question: Let's say I have a simple page that loads data from a table, manipulates the result and displays it. Simple. I'm going to use '=' for instantiating a class and '-' for passing a class in using constructor injection. It seems to me that the database has to be passed from one end of the application to the other, which doesn't seem right. Here's how I would do it if I wanted to separate concerns:

        index
          => Controller
            => Model Layer
              => Database
              => DAO -> Database

    I have this rule in my head that says I'm not supposed to create objects inside other objects. So what do I do with the Database? Or even the Model, for that matter? I'm obviously missing something so basic about this. I would love a simplified example so that I can move forward in my code. I feel really hamstrung by this.
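
    One common resolution, as a hedged C# sketch (all types invented for illustration; this is not necessarily what the poster's framework expects): instantiate the database exactly once at the entry point, the "composition root", and pass it down through constructors, so that no class creates its own dependencies:

        using System;

        interface IDatabase { string Query(string sql); }

        class MySqlDatabase : IDatabase
        {
            public string Query(string sql) => "result of: " + sql;
        }

        class UserDao
        {
            private readonly IDatabase _db;
            public UserDao(IDatabase db) { _db = db; }      // injected, never created here
            public string FindName(int id) => _db.Query("SELECT name ...");
        }

        class UserModel
        {
            private readonly UserDao _dao;
            public UserModel(UserDao dao) { _dao = dao; }
            public string DisplayName(int id) => _dao.FindName(id).ToUpper();
        }

        class Controller
        {
            private readonly UserModel _model;
            public Controller(UserModel model) { _model = model; }
            public void Handle() => Console.WriteLine(_model.DisplayName(1));
        }

        class Program
        {
            static void Main()
            {
                // The composition root: the only place where objects are wired up.
                IDatabase db = new MySqlDatabase();
                var controller = new Controller(new UserModel(new UserDao(db)));
                controller.Handle();
            }
        }

    Only the root sees the concrete database; everything below it receives its collaborators ready-made, so nothing has to "pass the DB around" through unrelated layers.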

    Read the article

  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
    In traditional asynchronous programming, we'd often use a callback to handle notification of a background task's completion. The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks. Asynchronous programming methods typically required callback functions. For example, MSDN's Asynchronous Delegates Programming Sample shows a class that factorizes a number. The original method in the example has the following signature:

        public static bool Factorize(int number, ref int primefactor1, ref int primefactor2)
        {
            //...
        }

    However, calling this is quite "tricky", even if we modernize the sample to use lambda expressions via C# 3.0. Normally, we could call this method like so:

        int primeFactor1 = 0;
        int primeFactor2 = 0;
        bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2);
        Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer);

    If we want to make this operation run in the background, and report to the console via a callback, things get trickier. First, we need a delegate definition:

        public delegate bool AsyncFactorCaller(
            int number, ref int primefactor1, ref int primefactor2);

    Then we need to use BeginInvoke to run this method asynchronously:

        int primeFactor1 = 0;
        int primeFactor2 = 0;
        AsyncFactorCaller caller = new AsyncFactorCaller(Factorize);
        caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2,
            result =>
            {
                int factor1 = 0;
                int factor2 = 0;
                bool answer = caller.EndInvoke(ref factor1, ref factor2, result);
                Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer);
            }, null);

    This works, but is quite difficult to understand from a conceptual standpoint. To combat this, the framework added the Event-based Asynchronous Pattern, but it isn't much easier to understand or author. Using .NET 4's new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable. We do this via the Task.ContinueWith method. This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it's a Task<T>) as an argument. Using Task, we can eliminate the delegate, and rewrite this code like so:

        var background = Task.Factory.StartNew(() =>
        {
            int primeFactor1 = 0;
            int primeFactor2 = 0;
            bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
            return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
        });

        background.ContinueWith(task =>
            Console.WriteLine("{0}/{1} [Succeeded {2}]",
                task.Result.Factor1, task.Result.Factor2, task.Result.Result));

    This is much simpler to understand, in my opinion. Here, we're explicitly asking to start a new task, then continuing it with a second task. In our case, our method used ref parameters (this was from the MSDN sample), so there is a little bit of extra boilerplate involved, but the code is at least easy to understand. That being said, this isn't dramatically shorter when compared with our C# 3 port of the MSDN code above. However, if we were to extend our requirements a bit, we can start to see more advantages to the Task-based approach. For example, suppose we need to report the results in a user interface control instead of reporting them to the Console. This would be a common operation, but now we have to think about marshaling our calls back to the user interface. This is probably going to require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate. The maintainability and ease of understanding drops. However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context. There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler. This means you can schedule the continuation to run on the UI thread, by simply doing:

        Task.Factory.StartNew(() =>
        {
            int primeFactor1 = 0;
            int primeFactor2 = 0;
            bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
            return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
        }).ContinueWith(task =>
            textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
                task.Result.Factor1, task.Result.Factor2, task.Result.Result),
            TaskScheduler.FromCurrentSynchronizationContext());

    This is far more understandable than the alternative. By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread and update the user interface on the proper UI thread. This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.

    Read the article

  • In C what is the difference between null and a new line character? Guys help please [migrated]

    - by Siddhartha Gurjala
    What's the conceptual difference and similarity between NULL and a newline character, i.e. between '\0' and '\n'? Explain their relevance for both integer and character data type variables and arrays. For reference, here are example snippets of a program to read and write a 2D char array.

    PROGRAM CODE 1:

        int main()
        {
            char sort(), stuname(), swap(), (*p)(), (*q)();
            int n;
            p = stuname;
            q = swap;
            printf("Let the number of students in the class be \n");
            scanf("%d", &n);
            fflush(stdin);
            sort(p, q, n);
            return 0;
        }

        char sort(p1, q1, n1)
        char (*p1)(), (*q1)();
        int n1;
        {
            (*p1)(n1);
            (*q1)();
        }

        char stuname(int nos) // number of students
        {
            char name[nos][256];
            int i, j;
            printf("Reading names of %d students started--->\n\n", nos);
            name[0][0] = 'k'; // initialising as non NULL charecter
            for (i = 0; i < nos; i++) // nos = number of students
            {
                printf("Give name of student %d\n", i);
                for (j = 0; j < 256; j++)
                {
                    scanf("%c", &name[i][j]);
                    if (name[i][j] == '\n')
                    {
                        name[i][j] = '\0';
                        j = 257;
                    }
                }
            }
            printf("\n\nWriting student names:\n\n");
            for (i = 0; i < nos; i++)
            {
                for (j = 0; j < 256 && name[i][j] != '\0'; j++)
                {
                    printf("%c", name[i][j]);
                }
                printf("\n");
            }
        }

        char swap()
        {
            printf("Will swap shortly after getting clarity on scanf and %c");
        }

    The above code works well, whereas the same logic with a slight difference is not giving appropriate output. Here's the code:

    PROGRAM CODE 2:

        #include <stdio.h>
        int main()
        {
            char sort(), stuname(), swap(), (*p)(), (*q)();
            int n;
            p = stuname;
            q = swap;
            printf("Let the number of students in the class be \n");
            scanf("%d", &n);
            fflush(stdin);
            sort(p, q, n);
            return 0;
        }

        char sort(p1, q1, n1)
        char (*p1)(), (*q1)();
        int n1;
        {
            (*p1)(n1);
            (*q1)();
        }

        char stuname(int nos) // number of students
        {
            char name[nos][256];
            int i, j;
            printf("Reading names of %d students started--->\n\n", nos);
            name[0][0] = 'k'; // initialising as non NULL charecter
            for (i = 0; i < nos; i++) // nos = number of students
            {
                printf("Give name of student %d\n", i);
                ***for (j = 0; j < 256 && name[i][j] != '\0'; j++)***
                {
                    scanf("%c", &name[i][j]);
                    /*if (name[i][j] == '\n')
                    {
                        name[i][j] = '\0';
                        j = 257;
                    }*/
                }
            }
            printf("\n\nWriting student names:\n\n");
            for (i = 0; i < nos; i++)
            {
                for (j = 0; j < 256 && name[i][j] != '\0'; j++)
                {
                    printf("%c", name[i][j]);
                }
                printf("\n");
            }
        }

        char swap()
        {
            printf("Will swap shortly after getting clarity on scanf and %c");
        }

    Here is one more instance of the same program not giving proper output:

    PROGRAM CODE 3:

        #include <stdio.h>
        int main()
        {
            char sort(), stuname(), swap(), (*p)(), (*q)();
            int n;
            p = stuname;
            q = swap;
            printf("Let the number of students in the class be \n");
            scanf("%d", &n);
            fflush(stdin);
            sort(p, q, n);
            return 0;
        }

        char sort(p1, q1, n1)
        char (*p1)(), (*q1)();
        int n1;
        {
            (*p1)(n1);
            (*q1)();
        }

        char stuname(int nos) // number of students
        {
            char name[nos][256];
            int i, j;
            printf("Reading names of %d students started--->\n\n", nos);
            name[0][0] = 'k'; // initialising as non NULL charecter
            for (i = 0; i < nos; i++) // nos = number of students
            {
                printf("Give name of student %d\n", i);
                ***for (j = 0; j < 256 && name[i][j] != '\n'; j++)***
                {
                    scanf("%c", &name[i][j]);
                    /*if (name[i][j] == '\n')
                    {
                        name[i][j] = '\0';
                        j = 257;
                    }*/
                }
                name[i][i] = '\0';
            }
            printf("\n\nWriting student names:\n\n");
            for (i = 0; i < nos; i++)
            {
                for (j = 0; j < 256 && name[i][j] != '\0'; j++)
                {
                    printf("%c", name[i][j]);
                }
                printf("\n");
            }
        }

        char swap()
        {
            printf("Will swap shortly after getting clarity on scanf and %c");
        }

    Why are program codes 2 and 3 not working as expected, the way code 1 does?

    Read the article

  • Must-see sessions at TCUK11

    - by Roger Hart
    Technical Communication UK is probably the best professional conference I've been to. Last year, I spoke there on content strategy, and this year I'll be co-hosting a workshop on embedded user assistance. Obviously, I'd love people to come along to that; but there are some other sessions I'd like to flag up for anybody thinking of attending.

    Tuesday 20th Sept - workshops

    This will be my first year at the pre-conference workshop day, and I'm massively glad that our workshop hasn't been scheduled alongside the one I'm really interested in. My picks:

    It looks like you're embedding user assistance. Would you like help?
    My colleague Dom and I are presenting this one. It's our paean to Clippy, to the brilliant idea he represented, and the crashing failure he was. Less precociously, we'll be teaching embedded user assistance, Red Gate style.

    Statistics without maths: acquiring, visualising and interpreting your data
    This doesn't need to do anything apart from what it says on the tin in order to be gold dust. But given the speakers, I suspect it will. A data-informed approach is a great asset to technical communications, so I'd recommend this session to anybody even faintly interested. The speakers here have a great track record of giving practical, accessible introductions to big topics. Go along.

    Wednesday 21st Sept - day one

    There's no real need to recommend the keynote for a conference, but I will just point out that this year it's Google's Patrick Hofmann. That's cool. You know what else is cool:

    Focus on the user, the rest follows
    An intro to modelling customer experience. This is a really exciting area for tech comms, and potentially touches on one of my personal hobby-horses: the convergence of technical communication and marketing. It's all part of delivering customer experience, and knowing what your users need lets you help them, sell to them, and delight them.

    Content strategy year 1: a tale from the trenches
    It's often been observed that content strategy is great at banging its own drum, but not so hot on compelling case studies. Here you go, folks. This is the presentation I'm most excited about so far.

    On a mission to communicate!
    Skype help their users communicate, but how do they communicate with them? I guess we'll find out.

    Then there's the stuff that I'm not too excited by, but you might just be. The standards geeks and agile freaks can get together in a presentation on the forthcoming ISO standards for agile authoring. Plus, there's a session on VBA for tech comms. I do have one gripe about day 1. The other big UK tech comms conference, UA Europe, have - I think - netted the more interesting presentation from Ellis Pratt. While I have no doubt that his TCUK case study on producing risk assessments will be useful, I'd far rather go to his talk on game theory for tech comms. Hopefully UA Europe will record it.

    Thursday 22nd Sept - day two

    Day two has a couple of slots yet to be confirmed. The rumour is that one of them will be the brilliant "Questions and rants" session from last year. I hope so. It's not ranting, but I'll be going to:

    RTFMobile: beyond stating the obvious
    Ultan O'Broin is an engaging speaker with a lot to say, and mobile is one of the most interesting and challenging new areas for tech comms. Even if this weren't a research-based presentation from a company with buckets of technology experience, I'd be going. It is, and you should too.

    Pattern recognition for technical communicators
    One of the best things about TCUK is the tendency to include sessions that tackle the theoretical and bring them towards the practical. Kai and Chris delivered cracking and well-received talks last year, and I'm looking forward to seeing what they've got for us on some of the conceptual underpinnings of technical communication.

    Developing an interactive non-text learning programme
    Annoyingly, this clashes with Pattern Recognition, so I hope at least one of the streams is recorded again this year. The idea of communicating complex information without words is fascinating, and this sounds like a great example of this year's third stream: "anything but text".

    For the localization and DITA crowds, there's rich pickings on day two, though I'm not sure how many of those sessions I'm interested in. In the 13:00 - 13:40 slot, there's an interesting clash between Linda Urban on re-use and training content, and a piece on minimalism I'm sorely tempted by. That's my pick of #TCUK11. I'll be doing a round-up blog after the event, and probably talking a bit more about it beforehand. I'm also reliably assured that there are still plenty of tickets.

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead to.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

        CREATE OR REPLACE FUNCTION credits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraints:

        CREATE OR REPLACE FUNCTION running_total(journal_entry)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

        ALTER TABLE journal_entry
        ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0));

        ALTER TABLE journal_line
        ADD FOREIGN KEY journal_entry_id REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER
        LANGUAGE PLPGSQL AS
        $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.id, account_id, amount FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that tracks approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use constraint triggers in PostgreSQL.
    4. Force all validation solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make work, and therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....

    Read the article

  • Pirates, Treasure Chests and Architectural Mapping

    Pirate 1: Why do pirates create treasure maps?
    Pirate 2: I do not know.
    Pirate 1: So they can find their gold.

    Yes, that was a bad joke, but it does illustrate a point. Pirates are known for drawing treasure maps to their most prized possessions. These documents detail the decisions pirates made in order to hide and find their chests of gold. The map allows them to trace the steps they took originally to hide their treasure so that they may return. As software engineers, programmers, and architects we need to treat software implementations much like our treasure chest. Why is software like a treasure chest?

    - It cost money, time, and resources to develop (usually)
    - It can make or save money, time, and resources (hopefully)

    If we operate under the assumption that software is like a treasure chest, then wouldn't it make sense to document the steps, rationale, concerns, and decisions about how it was designed? Pirates are notorious for documenting where they hide their treasure. Shouldn't we as creators of software do the same? Documenting our design decisions and the rationale behind them will help others understand and maintain implemented systems. This can only be done if the design decisions are correctly mapped to their corresponding implementation. This allows architectural decisions to be traced from the conceptual model, through the architectural design, and finally to the implementation. Mapping gives software professionals a method to trace the reason why specific areas of code were developed versus other options. Just like the pirates, we need to be able to trace our steps from the start of a project to its implementation, so that we will understand why specific choices were made. The traceability of a software implementation that actually maps back to its originating design decisions is invaluable for ensuring that architectural drift and erosion do not take place. Drift and erosion are prevented by allowing others to understand the rationale for why an implementation was created in a specific manner or methodology. The process of mapping distinct design concerns/decisions to the location of their implementation is called traceability. In this context, traceability is defined as a method for connecting distinctive software artifacts. This process allows architectural design models and decisions to be directly connected with their physical implementation. The process of mapping architectural design concerns to a software implementation can be very complex. However, most design decisions can be placed in a few generalized categories.

    Commonly Mapped Design Decisions

    - Design Rationale
    - Components and Connectors
    - Interfaces
    - Behaviors/Properties

    Design rationale is one of the hardest categories to map directly to an implementation. Typically this rationale is mapped or documented in code via comments. These comments consist of general design decisions and reasoning, because they do not directly refer to a specific part of an application; they typically focus more on the higher-level concerns. Components and connectors can be directly mapped to architectural concerns. Typically, concerns subdivide an application into distinct functional areas. These functional areas then map directly back to their originating concerns. Interfaces can be mapped back to design concerns in one of two ways. Interfaces that pertain to specific function definitions can be directly mapped back to their originating concern(s). However, more complicated interfaces require additional analysis to ensure that the proper mappings are created. Depending on the complexity, some Behaviors/Properties can be translated directly into a generic implementation structure that is ready for business logic. In addition, some behaviors can be translated directly into an actual implementation, depending on the complexity and the architectural tools used. Mapping design concerns to an implementation is a lot of work to maintain, but it is doable. In order to ensure that concerns are mapped correctly and that an implementation correctly reflects its design concerns, one of two standard approaches is usually used.

    All Changes Come From Architecture

    By forcing all application changes to come through the architectural model prior to implementation, the existing mappings will be used to locate where in the implementation changes need to occur.

    Allow Changes From Implementation Or Architecture

    By allowing changes to come from the implementation and/or the architecture, the other area must be kept in sync. This methodology is more complex compared to the previous approach. One reason to justify the added complexity for an application is that this approach tends to detect and prevent architectural drift and erosion. Additionally, this approach is usually maintained via software because of the complexity.

    Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we've picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We'll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn't. If you've got any questions about this (or what we're working on right now), feel free to ask me in the comments below. I've had a few people express an interest in the process we're going through, and I'm more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here's a quick flashback to bring you all up to speed.

    A Brief Retrospective

    As you may already know, we're creating a new publishing asset specifically focused on providing great content for web developers. We don't yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is usefully different. For my part, I'm seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing that just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that'll get better as we run more trials. Of course, there are a few things we've been pondering at this early conceptual stage:

    - Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we're familiar with) and branch out later? There are challenges with either approach.
    - What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.
    - Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and on how we manage the different types of content we (will) have.
    - Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?
    - What kind of users do we want to provide for? This actually deserves a little bit of unpacking…

    Why are you here?

    We currently divide readers into (broadly) these categories:

    - Category 1: I know nothing about X, and I'd like to learn about it.
    - Category 2: I know something about X, but I'd like to learn how to do something specific with it.
    - Category 3: Ah man, I have a problem with X, and I need to fix it now.

    Now that I think about it, I might also include a 4th class of reader:

    - Category 4: I'm looking for something interesting to engage my brain.

    These are clearly task-based categorizations, and depending on which task you're performing when you arrive here, you're going to need different types of content, or will have specific discovery needs. One of the questions that's at the back of my mind whenever I consider a new idea is "How many of the categories will this satisfy?" As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it's not necessary to satisfy every category of need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately.

    < / Flashback >

    We don't have clean answers to most of these considerations – they're things we're aware of, and each idea we look at is going to be best suited to a different mix of the options I've described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I've just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or if you just have an idea you'd like to share, I'd love to hear from you.

    Read the article

  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrids (DataGridViews, currently in WinForms). In one DataGrid, each row represents a column in the other. There I can set up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour", "DisplayType". There I can also set up how each one should be displayed: some columns can automatically be set to ComboBoxColumns, some TextBoxColumns, and so on and so forth. So, once I've set up these columns, I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one column called "Annual Gross Salary" with input of numerical values, and another column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary", etc. These labels are displayed as text in the combobox, and my statistics engine behind it then selects the respective value (0-3) for calculations (i.e. ordinal, nominal variables). Sooo, in WinForms I could basically generate all the columns by hand in code and then add my data in the respective cells/rows. In WPF I thought that must be easy to replicate. However, yesterday I got started with ICustomTypeDescriptor, which (maybe I was too thick) didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different layouts and controls (ComboBox, simple input, DateTimes) based on the data that I have. But I don't really know how to go about it. So here in summary:

    DataGrid 1
    - Purpose is to display columns that have been specified in DataGrid 2
    - In rows, the user can add any kind of data below the columns that is allowed by the columns' specifications

    DataGrid 2
    - Each row in this grid represents a column in DataGrid 1
    - Contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc.
    - Also contains a function to set up how it should be displayed. The user can select from, for instance, ComboBoxColumn (and also add the available options), DateTime, normal TextBox, CheckBox, etc.
    - After a row is added, it will automatically appear as a new column in DataGrid 1

    I'd appreciate any kind of pointer in the right direction. Thanks very, very much in advance! :)
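
    One hedged way to approach this in WPF (a sketch, not a definitive answer; the ColumnSpec type and its fields are invented to stand in for the variable definitions from grid 2): turn off AutoGenerateColumns and build the column objects in code from the specifications:

        using System.Windows.Controls;
        using System.Windows.Data;

        // Stand-in for one row of DataGrid 2: a variable definition.
        public class ColumnSpec
        {
            public string Header { get; set; }
            public string Path { get; set; }        // binding path into the data row
            public string[] Options { get; set; }   // non-null => render as combo box
        }

        public static class GridBuilder
        {
            public static void Build(DataGrid grid, ColumnSpec[] specs)
            {
                grid.AutoGenerateColumns = false;
                grid.Columns.Clear();
                foreach (var spec in specs)
                {
                    DataGridColumn column;
                    if (spec.Options != null)
                        column = new DataGridComboBoxColumn
                        {
                            Header = spec.Header,
                            ItemsSource = spec.Options,              // e.g. "NoEducation", ...
                            SelectedItemBinding = new Binding(spec.Path)
                        };
                    else
                        column = new DataGridTextColumn
                        {
                            Header = spec.Header,
                            Binding = new Binding(spec.Path)
                        };
                    grid.Columns.Add(column);
                }
            }
        }

    The rows themselves can then come from any collection whose items expose properties matching each Path (a DataTable or dynamically built row objects are common choices).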

    Read the article

  • Runge-Kutta (RK4) integration for game physics

    - by Kai
    Gaffer on Games has a great article about using RK4 integration for better game physics. The implementation is straightforward, but the math behind it confuses me. I understand derivatives and integrals on a conceptual level, but I haven't manipulated equations in a long time. Here's the brunt of Gaffer's implementation:

        void integrate(State &state, float t, float dt)
        {
            Derivative a = evaluate(state, t, 0.0f, Derivative());
            Derivative b = evaluate(state, t + dt*0.5f, dt*0.5f, a);
            Derivative c = evaluate(state, t + dt*0.5f, dt*0.5f, b);
            Derivative d = evaluate(state, t + dt, dt, c);

            const float dxdt = 1.0f/6.0f * (a.dx + 2.0f*(b.dx + c.dx) + d.dx);
            const float dvdt = 1.0f/6.0f * (a.dv + 2.0f*(b.dv + c.dv) + d.dv);

            state.x = state.x + dxdt * dt;
            state.v = state.v + dvdt * dt;
        }

    Can anybody explain in simple terms how RK4 works? Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep?

    After reading the accepted answer below, and several other articles, I have a grasp on how RK4 works. To answer my own questions:

    Can anybody explain in simple terms how RK4 works? RK4 takes advantage of the fact that we can get a much better approximation of a function if we use its higher-order derivatives rather than just the first or second derivative. That's why the Taylor series converges much faster than Euler approximations. (Take a look at the animation on the right side of that page.)

    Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? The Runge-Kutta method is an approximation of a function that samples derivatives at several points within a timestep, unlike the Taylor series, which only samples derivatives at a single point. After sampling these derivatives we need to know how to weight each sample to get the closest approximation possible. An easy way to do this is to pick constants that coincide with the Taylor series, which is how the constants of a Runge-Kutta equation are determined. This article made it clearer for me: http://web.mit.edu/10.001/Web/Course%5FNotes/Differential%5FEquations%5FNotes/node5.html. Notice how (15) is the Taylor series expansion while (17) is the Runge-Kutta derivation.

    How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep? Mathematically, it converges much faster than doing many Euler approximations. Of course, with enough Euler approximations we can gain accuracy equal to RK4, but the computational power needed doesn't justify using Euler.
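
    For readers who want to see the 1-2-2-1 weighting in action, here is a self-contained C# sketch (not Gaffer's code; the damped spring and its constants are invented for illustration) of one RK4 step for the system dx/dt = v, dv/dt = a(x, v):

        using System;

        class RK4Demo
        {
            const double k = 10.0, b = 0.5;   // spring and damping constants (made up)

            static double Accel(double x, double v) => -k * x - b * v;

            // One RK4 step: sample the derivative at t, twice at t + dt/2,
            // and at t + dt, then combine with weights 1, 2, 2, 1 over 6.
            static void Step(ref double x, ref double v, double dt)
            {
                double dx1 = v,                dv1 = Accel(x, v);
                double dx2 = v + dv1 * dt / 2, dv2 = Accel(x + dx1 * dt / 2, v + dv1 * dt / 2);
                double dx3 = v + dv2 * dt / 2, dv3 = Accel(x + dx2 * dt / 2, v + dv2 * dt / 2);
                double dx4 = v + dv3 * dt,     dv4 = Accel(x + dx3 * dt,     v + dv3 * dt);

                x += dt / 6 * (dx1 + 2 * dx2 + 2 * dx3 + dx4);
                v += dt / 6 * (dv1 + 2 * dv2 + 2 * dv3 + dv4);
            }

            static void Main()
            {
                double x = 1.0, v = 0.0, dt = 0.01;
                for (int i = 0; i < 5; i++)
                {
                    Step(ref x, ref v, dt);
                    Console.WriteLine($"x = {x:F6}, v = {v:F6}");
                }
            }
        }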

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have run into a few situations where the font size actually used is drastically different than I would expect. There's probably something conceptual that I'm missing.

    Case A

    In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (which is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName), the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified font size. That same user control, used in different parts of the app, uses the same font for Label and TextBox.

        <Grid x:Name="LayoutRoot" Background="White">
            <Grid.RowDefinitions>
                <RowDefinition Height="33" />
                <RowDefinition Height="267*" />
            </Grid.RowDefinitions>
            <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel"
                        VerticalAlignment="Top" Width="Auto" Grid.Row="1" />
            <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName"
                          VerticalAlignment="Top" Width="Auto"
                          FlexText="{Binding Name, Mode=TwoWay}" FontSize="20"
                          MinHeight="24" />
        </Grid>

    Case B

    I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small size by explicitly setting it to 8pt, but that had no effect.

        <Grid x:Name="LayoutRoot" Background="{x:Null}">
            <StackPanel x:Name="labelStackPanel" Orientation="Horizontal">
                <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText"
                           VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" />
            </StackPanel>
            <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40"
                             ItemSelected="MenuList_ItemSelected" Visibility="Collapsed"
                             Height="Auto" FontSize="8">
                <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" />
                <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" />
                <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" />
            </liquidMenu:Menu>
        </Grid>

    Answers to these specific issues are great, but a new way to think about this type of issue, so that I understand how to control font size myself, is even better.
    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse. How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best Solution? From what I can piece together, the most practical solution is as follows:

    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data-pump tool to efficiently pump the transformed CSV into the DW

    Why this approach?

    - TRIGGERS allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable
    - CSV is easy and fast to transform
    - It is easy to pump CSV into the DW

    Alternatives considered...

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log)
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW... the problem is that the OLTP schema would need to be tweaked to support this, and that has many negative side-effects
    - Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner
    - Using the WAL

    Has anyone done this before? Want to share your thoughts?

    Read the article

  • Defining an Entity Framework 1:1 association

    - by Craig Fisher
    I'm trying to define a 1:1 association between two entities (one maps to a table and the other to a view - using DefinedQuery) in an Entity Framework model. When trying to define the mapping for this in the designer, it makes me choose the one (1) table or view to map the association to. What am I supposed to choose? I can choose either of the two tables, but then I am forced to choose a column from that table (or view) for each end of the relationship. I would expect to be able to choose a column from one table for one end of the association and a column from the other table for the other end, but there's no way to do this. Here I've chosen to map to the "DW_ WF_ClaimInfo" view, and it is forcing me to choose two columns from that view - one for each end of the relationship. I've also tried defining the mapping manually in the XML as follows:

        <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2"
                               StoreEntitySet="Entity1">
          <EndProperty Name="Entity2">
            <ScalarProperty Name="DOCUMENT" ColumnName="DOCUMENT" />
          </EndProperty>
          <EndProperty Name="Entity1">
            <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" />
          </EndProperty>
        </AssociationSetMapping>

    But this gives:

        Error 2010: The Column 'DOCUMENT' specified as part of this MSL does not exist in MetadataWorkspace.

    It seems like it still expects both columns to come from the same table, which doesn't make sense to me. Furthermore, if I select the same key for each end, e.g.:

        <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2"
                               StoreEntitySet="Entity1">
          <EndProperty Name="Entity2">
            <ScalarProperty Name="DOCUMENT" ColumnName="PK_DocumentId" />
          </EndProperty>
          <EndProperty Name="Entity1">
            <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" />
          </EndProperty>
        </AssociationSetMapping>

    I then get:

        Error 3021: Problem in Mapping Fragment starting at line 675: Each of the following
        columns in table AssignedClaims is mapped to multiple conceptual side properties:
        AssignedClaims.PK_DocumentId is mapped to
        <AssignedClaimDW_WF_ClaimInfo.DW_WF_ClaimInfo.DOCUMENT,
        AssignedClaimDW_WF_ClaimInfo.AssignedClaim.PK_DocumentId>

    What am I not getting?

    Read the article

  • Bazaar newbie question about repository structures

    - by esc1729
    I want to use Bazaar on Windows XP for web development and related tasks. Most of the files are edited locally and then transferred via FTP to the server. Just now the repository sits on my local workstation. Later on it should be shared locally with some co-workers. Perhaps we will use a local Linux server as a centralized repository, but this structure is not decided yet. But first I need to understand the impacts of the different repository setups, which I do not understand at all. Using Bazaar Explorer on Windows XP I've created a 'shared tree repository' from the option list of the init dialogue in some location dev-filter/. Bazaar Explorer tells me:

        Created repository with treeless branches at F:/bzr.local/dev-filter
        Created branch at F:/bzr.local/dev-filter/trunk
        Created working tree at F:/bzr.local/dev-filter/work

    OK so far. Now I move a bunch of files into the work directory and add and commit them as Rev 1, 'Start Revision'. Then I work on some of these files and commit them again as Rev 2. Here my confusion starts. Shouldn't both revisions go into the trunk? The trunk is still empty, besides the .bzr directory which only holds some management information. If I delete my working directory, which I have tried during these first experiments, everything is gone. There's obviously no hidden storage of those files. OK. Perhaps I need to push it into the trunk? This does not work either. Entering the work/ directory and initiating the 'push' to the trunk, Bazaar Explorer tells me:

        No new revisions to push.

    So what? This looks like a severe conceptual misunderstanding on my side about what should happen.

    Edit, 2010-02-03: Some conclusions

    What I learned meanwhile is this:

    - I think I should switch to the command line until I really understand what's going on, at least for creating the repositories and branches. Bazaar Explorer introduces a new level of abstraction which I can only handle if I understand the level beneath.
    - One of the secrets of working with Bazaar, at least for me, is to understand those .bzr directories, their particular properties and states when created with 'bzr init', 'bzr init-repository', 'bzr branch' etc. in all their variants, and how they are plumbed together.
    - While there's a whole chapter on 'Organizing your workspace' in the Bazaar User Guide, it's more or less workflow-oriented. The manual contains a lot of directory structures for the given examples. What I would prefer beside this, and have not (or only rudimentarily) found so far, is some graphical representation of those 'Lego-like' .bzr building blocks which create the linking of all the parts. So I started to invent some simple notation while working through the examples, looking into the .bzr directories to document what information is stored there, where it comes from, how and to what it is linked, whether it is complete or shared, etc.

    Erich Schreiber

    Read the article

  • Crawling engine architecture - Java/ Perl integration

    - by Bigtwinz
    Hi all, I am looking to develop a management and administration solution around our web-crawling Perl scripts. Basically, right now our scripts are saved in SVN and are manually kicked off by SysAdmins/devs etc. Every time we need to retrieve data from new sources we have to create a ticket with business instructions and goals. As you can imagine, not an optimal solution. There are 3 consistent themes with this system:

    - the retrieval of data has a "conceptual structure", for lack of a better phrase, i.e. the retrieval of information follows a particular path
    - we are only looking for very specific information, so we don't have to really worry about extensive crawling for a while (think thousands to tens of thousands of pages vs millions)
    - crawls are URL-based instead of site-based

    As I enhance this alpha version to a more production-level beta, I am looking to add automation and management of the retrieval of data. Additionally, our other systems are Java (which I'm more proficient in), and I'd like to compartmentalize the Perl aspects so we don't have to lean heavily on outside help. I've evaluated the usual suspects, Nutch, Droids etc., but the time spent on modifying those frameworks to suit our specific information retrieval can't be justified. So I'd like your thoughts regarding the following architecture. I want to create a solution which:

    - uses Java as the interface for managing and executing the Perl scripts
    - uses Java for configuration and data access
    - sticks with Perl for retrieval

    An example use case would be:

    - a data analyst delivers us a requirement for crawling
    - a Perl developer creates the required script and uses this webapp to submit the script (which gets saved to the filesystem)
    - the script gets kicked off from the webapp with specific parameters...

    The webapp should be able to create multiple threads of the Perl script to initiate multiple crawlers. So my questions are:

    - what do you think?
    - how solid is the integration between Java and Perl, specifically calling Perl from Java?
    - has someone used such a system, one which actually includes a Perl repository?

    The goal really is to not have a whole bunch of unorganized Perl scripts, and to put some management and organization around our information retrieval. Also, I know I can use Perl to do the web part of what we want - but as I mentioned before, I'm trying to keep things Perl-focused where they belong. But if that seems ass-backwards, I'm not averse to making it an all-Perl solution. Open to any and all suggestions and opinions. Thanks

    Read the article

  • GOTO still considered harmful?

    - by Kyle Cronin
    Everyone is aware of Dijkstra's letter to the editor, "Go to statement considered harmful" (also here .html transcript and here .pdf), and there has been a formidable push since that time to eschew the goto statement whenever possible. While it's possible to use goto to produce unmaintainable, sprawling code, it nevertheless remains in modern programming languages. Even the advanced continuation control structure in Scheme can be described as a sophisticated goto. What circumstances warrant the use of goto? When is it best avoided?

    As a follow-up question: C provides a pair of functions, setjmp and longjmp, that provide the ability to goto not just within the current stack frame but within any of the calling frames. Should these be considered as dangerous as goto? More dangerous?

    Dijkstra himself regretted that title, for which he was not responsible. At the end of EWD1308 (also here .pdf) he wrote:

        Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful". But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.

    A well-thought-out classic paper about this topic, to be matched with Dijkstra's, is "Structured Programming with go to Statements" (also here .pdf) by Donald E. Knuth. Reading both helps to re-establish context and a non-dogmatic understanding of the subject. In this paper, Dijkstra's opinion on the case is reported, and it is even stronger:

        Donald E. Knuth: I believe that by presenting such a view I am not in fact disagreeing sharply with Dijkstra's ideas, since he recently wrote the following: "Please don't fall into the trap of believing that I am terribly dogmatical about [the go to statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!"
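
    As one concrete data point for the "what circumstances warrant goto" question: the most commonly cited legitimate uses (escaping nested loops, and centralized error-handling cleanup in C) are exactly the ones that later languages absorbed into structured constructs. A small illustration in Java, which reserves the goto keyword but never implemented it, using a labeled break; the names are invented for the example:

        public class NestedSearch {
            // Find a target in a 2-D grid. Without the labeled break this needs
            // a goto (in C), a flag variable, or an extracted method.
            static int[] find(int[][] grid, int target) {
                int[] found = null;
                search:                       // label on the outer loop
                for (int row = 0; row < grid.length; row++) {
                    for (int col = 0; col < grid[row].length; col++) {
                        if (grid[row][col] == target) {
                            found = new int[] { row, col };
                            break search;     // leaves both loops at once
                        }
                    }
                }
                return found;
            }

            public static void main(String[] args) {
                int[] hit = find(new int[][] { { 1, 2 }, { 3, 4 } }, 3);
                System.out.println(hit == null ? "absent" : hit[0] + "," + hit[1]);
            }
        }

    setjmp/longjmp have no counterpart at this level; their closest structured descendant is exception handling, which is arguably the crux of the follow-up question.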

    Read the article

  • Is the design notion of layers contrived?

    - by Bruce
    Hi all, I'm reading through Eric Evans' awesome work, Domain-Driven Design. However, I can't help feeling that the 'layers' model is contrived. To expand on that statement: it seems as if it tries to shoe-horn various concepts into a specific, neat model, that of layers talking to each other. It seems to me that the layers model is too simplified to actually capture the way that (good) software works.

    To expand further, Evans says: "Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below. Follow standard architectural patterns to provide loose coupling to the layers above."

    Maybe I'm misunderstanding what 'depends' means, but as far as I can see it can mean either (a) class X (in the UI, for example) has a reference to a concrete class Y (in the main application), or (b) class X has a reference to a class-Y-ish object providing class-Y-ish services (i.e. a reference held as an interface). If it means (a), then this is clearly a bad thing, since it defeats re-using the UI as a front-end to some other application that provides Y-ish functionality. But if it means (b), then how is the UI any more dependent on the application than the application is dependent on the UI? Both are decoupled from each other as much as they can be while still talking to each other.

    Evans' layer model of dependencies going one way seems too neat. First, isn't it more accurate to say that each area of the design provides a module that is pretty much an island to itself, and that ideally all communication is through interfaces, in a contract-driven/responsibility-driven paradigm? (That is, the 'dependency only on lower layers' is contrived.) Likewise with the domain layer talking to the database: the domain layer is as decoupled (through DAOs etc.) from the database as the database is from the domain layer. Neither is dependent on the other; both can be swapped out. Second, the idea of a conceptual straight line (as in from one layer to the next) is artificial: isn't there more a network of intercommunicating but separate modules, including external services, utility services and so on, branching off at different angles?

    Thanks all, hoping that your responses can clarify my understanding of this.
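
    To make reading (b) concrete, here is a minimal sketch of the interface-mediated dependency the question describes (all names invented for illustration). Note that the example does not settle the argument either way: both sides couple only to OrderService; the dependency arrow points "down" only because convention says the lower layer owns the interface.

        // The contract, conventionally owned by the application layer.
        interface OrderService {
            String statusOf(int orderId);
        }

        // Application layer: a concrete implementation the UI never names.
        class DefaultOrderService implements OrderService {
            public String statusOf(int orderId) {
                return orderId % 2 == 0 ? "shipped" : "pending"; // stand-in logic
            }
        }

        // UI layer: holds the reference as an interface, reading (b) above.
        class OrderScreen {
            private final OrderService orders;

            OrderScreen(OrderService orders) { // any implementation can be injected
                this.orders = orders;
            }

            void show(int orderId) {
                System.out.println("Order " + orderId + ": " + orders.statusOf(orderId));
            }
        }

        public class LayersDemo {
            public static void main(String[] args) {
                new OrderScreen(new DefaultOrderService()).show(7);
            }
        }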

    Read the article

  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level:

    1. I hack on my new feature for a while.
    2. I notice a typo in a comment, so I change it. Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes.
    3. I keep working on the code.
    4. I realize I need to flesh out a few utility functions; I do so, and I put that change in its own pile.
    5. Steps 2, 3, and 4 each repeat throughout the day.
    6. I finish the new feature and put the changes for that feature in a pile.
    7. I push nice patches upstream: one with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated.

    Since I'm both lazy and a perfectionist, I want to be able to do some things out of order. I might correct a typo but forget to put it in the comment-fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll pull the comment fixes out at that point. I might forget to separate things altogether until the very end. But I might also have committed some of the piles along the way (sorry, the metaphor is breaking down …).

    This is all rather like just using Eclipse changesets with SVN, only I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact …), and with Git I can have my full discombobulated change history, experimental branches and all, but still make a nice, neat patch.

    I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map onto Git, though, is that a "pile" can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a "pile", and what I want is to have more than one somehow (effectively). I can think of a few ways to approximate what I want (maybe creative use of stash? intricate stash-checkout-merge dances?), but my grasp of Git isn't solid enough to be sure of how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: how do I build this thing with these tools?
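
    For what it's worth, the usual low-tech approximation of the "piles" is interactive hunk staging plus one topic branch per pile; a sketch under that assumption (branch names invented, and "trunk" standing in for whatever the git-svn upstream branch is called locally):

        # Work on the feature as usual; typo fixes and utility work get mixed in.
        git checkout -b feature-x

        # When committing, stage only the hunks that belong to one pile.
        git add -p                    # pick the feature hunks interactively
        git commit -m "feature-x: next step"
        git add -p                    # now pick just the comment-typo hunks
        git commit -m "comment fixes"

        # Out-of-order repair: if a typo fix landed in a feature commit,
        # reshuffle before publishing (history is still private to git-svn).
        git rebase -i trunk           # reorder, split, or squash into clean piles

        # Optionally collect each pile on its own branch before dcommit:
        git checkout -b comment-fixes trunk
        git cherry-pick <sha-of-comment-fix-commit>

    The index really is a single pile, which is why add -p plus rebase -i is the usual substitute for having several of them; stash can park the whole tree, but it cannot sort hunks into piles.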

    Read the article

  • When I really need to use [NSThread sleepForTimeInterval:1];

    - by Timbo
    Hi there, I have a program that needs to use sleep. Like, really needs to; in lieu of spending ages explaining why, suffice it to say that it needs it. Now, I'm told to split code off into a separate thread if it requires sleep, so that I don't lose interface responsiveness, so I've started learning how to use NSThread. I've created a brand new program that is purely conceptual, so solving the issue for this example will help me in my real program. The short story: I have a class, it has instance variables, and I need a loop with a sleep that depends on the value of one of those instance variables. Here's what I've put together anyway; your help is very much appreciated :) Cheers, Tim

        /// Start Test1ViewController.h ///
        #import <UIKit/UIKit.h>
        #import "MyClass.h"

        @interface Test1ViewController : UIViewController {
            UILabel *label;
            MyClass *aMyClassInstance;   // an ivar, so updateLabel can read it too
        }
        @property (assign) IBOutlet UILabel *label;
        @end
        /// End Test1ViewController.h ///

        /// Start Test1ViewController.m ///
        #import "Test1ViewController.h"

        @implementation Test1ViewController
        @synthesize label;

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];
            label.text = @"1";
            [NSThread detachNewThreadSelector:@selector(backgroundProcess)
                                     toTarget:self
                                   withObject:nil];
        }

        - (void)backgroundProcess {
            // Every thread that uses Cocoa needs its own autorelease pool.
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            // An instance of a class that will eventually store a whole load of variables.
            aMyClassInstance = [MyClass new];
            [aMyClassInstance createMyClassInstance:@"Timbo"];

            while (aMyClassInstance.myVariable--) {
                NSLog(@"blah = %i", aMyClassInstance.myVariable);
                // UIKit is not thread-safe, so the label is only touched on the
                // main thread; updateLabel reads the shared aMyClassInstance ivar.
                [self performSelectorOnMainThread:@selector(updateLabel)
                                       withObject:nil
                                    waitUntilDone:NO];
                // The sleeping of the thread is absolutely mandatory and must be
                // worked around; the whole point of using NSThread is the sleeps.
                [NSThread sleepForTimeInterval:1];
            }

            [pool release];
        }

        - (void)updateLabel {
            label.text = [NSString stringWithFormat:@"blah = %d",
                          aMyClassInstance.myVariable];
        }

        - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; }
        - (void)viewDidUnload {}

        - (void)dealloc {
            [aMyClassInstance release];
            [super dealloc];
        }
        @end
        /// End Test1ViewController.m ///

        /// Start MyClass.h ///
        #import <Foundation/Foundation.h>

        @interface MyClass : NSObject {
            NSString *name;
            int myVariable;
        }
        @property int myVariable;
        @property (assign) NSString *name;

        - (void)createMyClassInstance:(NSString *)withName;
        - (int)changeVariable:(int)toAmount;
        @end
        /// End MyClass.h ///

        /// Start MyClass.m ///
        #import "MyClass.h"

        @implementation MyClass
        @synthesize name, myVariable;

        - (void)createMyClassInstance:(NSString *)withName {
            name = withName;    // (assign) property: no ownership is taken
            myVariable = 10;
        }

        - (int)changeVariable:(int)toAmount {
            myVariable = toAmount;
            return toAmount;
        }
        @end
        /// End MyClass.m ///

    Read the article

< Previous Page | 7 8 9 10 11 12 13  | Next Page >