Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • Hadoop/MapReduce: Reading and writing classes generated from DDL

    - by Dave
    Hi, can someone walk me through the basic workflow of reading and writing data with classes generated from DDL? I have defined some struct-like records using DDL. For example: class Customer { ustring FirstName; ustring LastName; ustring CardNo; long LastPurchase; } I've compiled this to get a Customer class and included it in my project. I can easily see how to use this as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to a file. The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in Binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? What InputFormat would I use to read the resulting files back in later, if I wanted to perform analysis over them?
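
    A rough sketch of one way to wire this up, assuming the newer org.apache.hadoop.mapreduce API and that the DDL-generated Customer class implements Writable as described: SequenceFile formats will persist any Writable value, and a follow-up job can read the pairs back with SequenceFileInputFormat. For CSV output you would instead emit Text from the reducer or write a custom OutputFormat.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CustomerJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "customer-output");
        job.setJarByClass(CustomerJob.class);

        // Mapper/reducer and input setup omitted; the reducer emits
        // IntWritable keys and Customer (Writable) values.
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Customer.class);   // the DDL-generated class

        // SequenceFileOutputFormat stores the raw key/value Writables;
        // SequenceFileInputFormat reads them back in a later job.
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```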


  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app or a start. The app has two views (a list view, and a detail view) and a model with four fields: class News(models.Model): title = models.CharField(max_length=250) content = models.TextField() pub_date = models.DateTimeField(default=datetime.datetime.now) slug = models.SlugField(unique=True) I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following, and you could point me to? my tests.py (it contains 11 tests): # -*- coding: utf-8 -*- from django.test import TestCase from django.test.client import Client from django.core.urlresolvers import reverse import datetime from someproject.myapp.models import News class viewTest(TestCase): def setUp(self): self.test_title = u'Test title: bareksc' self.test_content = u'This is a content 156' self.test_slug = u'test-title-bareksc' self.test_pub_date = datetime.datetime.today() self.test_item = News.objects.create( title=self.test_title, content=self.test_content, slug=self.test_slug, pub_date=self.test_pub_date, ) client = Client() self.response_detail = client.get(self.test_item.get_absolute_url()) self.response_index = client.get(reverse('the-list-view')) def test_detail_status_code(self): """ HTTP status code for the detail view """ self.failUnlessEqual(self.response_detail.status_code, 200) def test_list_status_code(self): """ HTTP status code for the list view """ self.failUnlessEqual(self.response_index.status_code, 200) def test_list_numer_of_items(self): self.failUnlessEqual(len(self.response_index.context['object_list']), 1) def test_detail_title(self): self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title) def test_list_title(self): self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title) def test_detail_content(self): self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content) def test_list_content(self): self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content) def test_detail_slug(self): self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug) def test_list_slug(self): self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug) def test_detail_template(self): self.assertContains(self.response_detail, self.test_title) self.assertContains(self.response_detail, self.test_content) def test_list_template(self): self.assertContains(self.response_index, self.test_title)
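
    As a point of comparison (a sketch, not a rewrite of the code above): newer Django releases spell the assertion assertEqual rather than the failUnlessEqual alias, TestCase already provides self.client, and related checks on the same response can live in one test. The model import and the 'the-list-view' URL name are taken from the question.

```python
# -*- coding: utf-8 -*-
import datetime

from django.test import TestCase
from django.core.urlresolvers import reverse

from someproject.myapp.models import News


class NewsViewsTest(TestCase):
    def setUp(self):
        self.item = News.objects.create(
            title=u'Test title: bareksc',
            content=u'This is a content 156',
            slug=u'test-title-bareksc',
            pub_date=datetime.datetime.today(),
        )

    def test_detail_view(self):
        # One request, several related assertions about it.
        response = self.client.get(self.item.get_absolute_url())
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.context['object'].title, self.item.title)
        self.assertContains(response, self.item.content)

    def test_list_view(self):
        response = self.client.get(reverse('the-list-view'))
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(response.context['object_list']), 1)
        self.assertContains(response, self.item.title)
```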


  • Writing a VM - well formed bytecode?

    - by David Titarenco
    Hi, I'm writing a virtual machine in C just for fun. Lame, I know, but luckily I'm on SO so hopefully no one will make fun :) I wrote a really quick'n'dirty VM that reads lines of (my own) ASM and does stuff. Right now, I only have 3 instructions: add, jmp, end. All is well and it's actually pretty cool being able to feed it lines (doing something like write_line(&prog[1], "jmp", regA, regB, 0);) and then running the program: while (machine.code_pointer <= BOUNDS && DONE != true) { run_line(&prog[machine.cp]); } I'm using an opcode lookup table (which may not be efficient but it's elegant) in C and everything seems to be working OK. My question is more of a "best practices" question, but I do think there's a correct answer to it. I'm making the VM able to read binary files (storing bytes in unsigned char[]) and execute bytecode. My question is: is it the VM's job to make sure the bytecode is well formed, or is it just the compiler's job to make sure the binary file it spits out is well formed? I only ask this because of what would happen if someone edited a binary file and screwed stuff up (deleted arbitrary parts of it, etc.). Clearly, the program would be buggy and probably not functional. Is this even the VM's problem? I'm sure that people much smarter than me have figured out solutions to these problems, I'm just curious what they are!
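
    One common answer is to verify the image once at load time, so the dispatch loop can trust it afterwards. A minimal sketch, assuming a fixed-size instruction record and the three opcodes above (the field names are made up for illustration):

```c
#include <stdbool.h>
#include <stddef.h>

enum opcode { OP_ADD = 0, OP_JMP = 1, OP_END = 2, OP_COUNT };

struct instr {
    unsigned char op;        /* opcode */
    unsigned char a, b, c;   /* operand / register fields */
};

/* Validate a loaded program once, before executing anything. */
bool verify_program(const struct instr *prog, size_t count, size_t num_regs)
{
    if (count == 0 || prog[count - 1].op != OP_END)
        return false;                              /* must terminate with an end instruction */

    for (size_t i = 0; i < count; i++) {
        if (prog[i].op >= OP_COUNT)
            return false;                          /* unknown opcode */
        if (prog[i].op == OP_JMP && prog[i].a >= count)
            return false;                          /* jump target out of bounds */
        if (prog[i].op == OP_ADD &&
            (prog[i].a >= num_regs || prog[i].b >= num_regs))
            return false;                          /* bad register index */
    }
    return true;
}
```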


  • Reading/Writing DataTables to and from an OleDb Database with LINQ

    - by jsmith
    My current project is to take information from an OleDbDatabase and .CSV files and place it all into a larger OleDbDatabase. I have currently read in all the information I need from both .CSV files and the OleDbDatabase into DataTables. Where it is getting hairy is writing all of the information back to another OleDbDatabase. Right now my current method is to do something like this: OleDbTransaction myTransaction = null; try { OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" + "Data Source=" + Database); conn.Open(); OleDbCommand command = conn.CreateCommand(); string strSQL; command.Transaction = myTransaction; strSQL = "Insert into TABLE " + "(FirstName, LastName) values ('" + FirstName + "', '" + LastName + "')"; command.CommandType = CommandType.Text; command.CommandText = strSQL; command.ExecuteNonQuery(); conn.Close(); } catch (Exception) { // IF invalid data is entered, rolls back the database myTransaction.Rollback(); } Of course, this is very basic and I'm using an SQL command to commit my transactions to a connection. My problem is I could do this, but I have about 200 fields that need to be inserted over several tables. I'm willing to do the leg work if that's the only way to go. But I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
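
    For the single-insert case, a parameterized command inside a real transaction avoids both the string concatenation and the unbalanced try block; a sketch is below (the table and column names are the placeholders from the question). For 200 fields across several tables, filling DataTables and pushing them with an OleDbDataAdapter and adapter.Update() usually means less hand-written SQL than LINQ, since LINQ to SQL targets SQL Server rather than arbitrary OleDb sources.

```csharp
using System.Data.OleDb;

public static class CustomerWriter
{
    public static void Insert(string database, string firstName, string lastName)
    {
        string connStr = "PROVIDER=Microsoft.Jet.OLEDB.4.0;Data Source=" + database;

        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            using (OleDbTransaction tx = conn.BeginTransaction())
            using (OleDbCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                // OleDb parameters are positional, so add them in the order of the ?s.
                cmd.CommandText = "INSERT INTO [TABLE] (FirstName, LastName) VALUES (?, ?)";
                cmd.Parameters.Add("@FirstName", OleDbType.VarChar).Value = firstName;
                cmd.Parameters.Add("@LastName", OleDbType.VarChar).Value = lastName;

                try
                {
                    cmd.ExecuteNonQuery();
                    tx.Commit();                 // make the insert permanent
                }
                catch
                {
                    tx.Rollback();               // undo everything on bad data
                    throw;
                }
            }
        }
    }
}
```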


  • Intermittent Issue Writing to Google Appengine Datastore

    - by user242153
    Hi, I have a functioning app and recently have had intermittent problems writing to the datastore. I did not make any relevant code changes; however, in the last few days my attempts to write to the datastore sometimes work and sometimes don't. I am trying to save an object that is in a many-to-one relationship with an existing persisted parent. So, the logic works like this: 1) Parent pulled from the datastore 2) Child created / instantiated using constructor 3) Parent.addSingleChild(child); // the "addSingleChild" method just adds the object argument to the collection of children 4) child.setParent(Parent); // sets the Parent object to the parent field I am using transactions as explained in the documentation, ending with "finally {if (tx.isActive()) {tx.rollback(); } }" When the servlet is called, the parent is pulled from the datastore and the child object is created and added to the many-to-one mapping to the pre-existing parent. The child should automatically be persisted, since the parent is already persistent and the child is added to the collection of children that map to the parent. And it worked this way in the past. However, to be sure, I did add a pm.makePersistent(child). It doesn't seem to help; I still have the intermittent problem. Any suggestions would be appreciated, and if you need to see the actual code I can post it. Thanks
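
    For comparison, a stripped-down sketch of the transaction shape described above, assuming JDO on App Engine and a PersistenceManager pm already obtained; Parent and Child stand in for the question's own classes, and getChildren()/setParent() are assumed accessors. Making the write explicit and committing before closing removes any reliance on persistence-by-reachability happening at close time.

```java
import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;

public class ChildWriter {
    public void addChild(PersistenceManager pm, Object parentKey, Child child) {
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            Parent parent = pm.getObjectById(Parent.class, parentKey);
            parent.getChildren().add(child);    // owned one-to-many collection
            child.setParent(parent);
            pm.makePersistent(parent);          // explicit, not just reachability
            tx.commit();
        } finally {
            if (tx.isActive()) {
                tx.rollback();                  // commit never happened
            }
            pm.close();
        }
    }
}
```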


  • Writing to a new log file each day with TraceSource

    - by Cipher
    I am using a logger in my application to write to files. The source, switch and listeners have been defined in the app.config file as follows: <system.diagnostics> <sources> <source name="LoggerApp" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch"> <listeners> <add name="myListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="myListener.log" /> </listeners> </source> </sources> <switches> <add name="sourceSwitch" value="Information" /> </switches> </system.diagnostics> Inside my .cs code, I use the logger as follows: private static TraceSource logger = new TraceSource("LoggerApp"); logger.TraceEvent(TraceEventType.Information, 1, "{0} : Started the application", DateTime.Now); What would I have to do to create a new log file each day instead of writing to the same log file every time?
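
    TextWriterTraceListener has no built-in rolling, so one common workaround is to construct the listener in code and bake the date into the file name; a sketch, assuming it is acceptable to drop the app.config listener entry and configure the source at startup (names reuse those from the question):

```csharp
using System;
using System.Diagnostics;

public static class DailyLogger
{
    public static TraceSource Create()
    {
        var source = new TraceSource("LoggerApp")
        {
            Switch = new SourceSwitch("sourceSwitch", "Information")
        };

        // One file per day, e.g. myListener-2017-03-02.log; re-create the
        // listener (or restart) when the date changes to roll to the next file.
        string fileName = "myListener-" + DateTime.Now.ToString("yyyy-MM-dd") + ".log";
        source.Listeners.Add(new TextWriterTraceListener(fileName, "myListener"));
        return source;
    }
}
```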


  • Writing a plist

    - by iOS-Newbie
    I am trying to test out writing a dictionary to a plist. The following code does not report any errors, but I cannot find any trace of the file that I supposedly wrote. Here is the code snippet: NSDictionary *myDictionary = [NSDictionary dictionaryWithObjectsAndKeys: @"First letter of the alphabet", @"A", @"Second letter of the alphabet", @"B", @"Third letter of the alphabet", @"C", nil ]; I can see the dictionary contents displayed properly with either of these method calls: NSLog(@"Here is my partial dictionary %@", myDictionary); for (NSString *key in myDictionary) NSLog(@"here it is again %@ %@", key, [myDictionary objectForKey:key]); The following code displays the "succeeded" message when the program is run repeatedly: if ([myDictionary writeToFile: @"myDictionary" atomically:YES ] == NO) NSLog(@"write to file failed"); else NSLog(@"write to file succeeded"); even when changing the atomically: argument to NO so as not to write a temporary file. However, when I search my current directory, or even my entire Mac, I cannot find any file called "myDictionary.plist" or any file with the string "myDictionary". Isn't the path argument @"myDictionary" supposed to refer to a file in the current directory, i.e. where the class executable resides?
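
    A bare file name is resolved against the process's current working directory, which for a GUI app is rarely where you expect (and on iOS is not writable at all), so the usual fix is to build an absolute path first. A small sketch, assuming the Documents directory is an acceptable destination:

```objc
// Build an absolute path under Documents and write there.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask, YES);
NSString *plistPath = [[paths objectAtIndex:0]
                          stringByAppendingPathComponent:@"myDictionary.plist"];

if ([myDictionary writeToFile:plistPath atomically:YES]) {
    NSLog(@"wrote %@", plistPath);   // log the full path so the file is easy to find
} else {
    NSLog(@"write to file failed");
}
```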


  • Any Alternate way for writing to a file other than ofstream

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I am using this method of writing to the output file: ofstream hdrOutputFile; /* a few other statements */ hdrOutputFile.open(fileName, std::ios::out); hdrOutputFile << "#include \"commondata.h\"" << endl ; hdrOutputFile << "#include \"Commonconfig.h\"" << endl ; hdrOutputFile << "#include \"commontable.h\"" << endl << endl ; hdrOutputFile << "#pragma pack(push,1)" << endl ; hdrOutputFile << "typedef struct \n {" << endl ; /* similar hdrOutputFile statements... */ I have around 250 lines to write. Is there any better way to perform this task? I want to reduce these hdrOutputFile statements and use a buffer to do this. Please guide me on how to do that. I mean, buff = "#include \"commontable.h\"" + "typedef struct \n {" + ...; hdrOutputFile << buff. Is this possible? Thanks, Ramm
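
    One way to cut down on the repeated stream insertions is to assemble the text in a std::ostringstream (or one big string literal) and write it out once; a short sketch of that idea:

```cpp
#include <fstream>
#include <sstream>
#include <string>

void writeHeader(const std::string& fileName)
{
    std::ostringstream buf;
    buf << "#include \"commondata.h\"\n"
        << "#include \"Commonconfig.h\"\n"
        << "#include \"commontable.h\"\n\n"
        << "#pragma pack(push,1)\n"
        << "typedef struct\n{\n";
    // ... append the remaining lines the same way ...

    std::ofstream out(fileName.c_str());
    out << buf.str();                 // single write of the whole buffer
}
```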


  • Effective communication in a component-based system

    - by Tesserex
    Yes, this is another question about my game engine, which is coming along very nicely, with much thanks to you guys. So, if you watched the video (or didn't), the objects in the game are composed of various components for things like position, sprites, movement, collision, sounds, health, etc. I have several message types defined for "tell" type communication between entities and components, but this only goes so far. There are plenty of times when I just need to ask for something, for example an entity's position. There are dozens of lines in my code that look like this: SomeComponent comp = (SomeComponent)entity.GetComponent(typeof(SomeComponent)); if (comp != null) comp.GetSomething(); I know this is very ugly, and I know that casting smells of improper OO design. But as complex as things are, there doesn't seem to be a better way. I could of course "hard-code" my component types and just have SomeComponent comp = entity.GetSomeComponent(); but that seems like a cop-out, and a bad one. I literally JUST REALIZED, while writing this, after having my code this way for months with no solution, that a generic will help me. SomeComponent comp = entity.GetComponent<SomeComponent>(); Amazing how that works. Anyway, this is still only a semantic improvement. My questions remain. Is this actually that bad? What's a better alternative?
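
    For reference, a sketch of what the generic accessor can look like inside the entity, assuming components are stored keyed by their concrete type; the cast still exists, but only in this one place:

```csharp
using System;
using System.Collections.Generic;

public abstract class Component { }

public class Entity
{
    private readonly Dictionary<Type, Component> components =
        new Dictionary<Type, Component>();

    public void AddComponent(Component c)
    {
        components[c.GetType()] = c;
    }

    // Returns the component of type T, or null if this entity doesn't have one.
    public T GetComponent<T>() where T : Component
    {
        Component c;
        components.TryGetValue(typeof(T), out c);
        return (T)c;    // the single remaining cast, hidden behind the generic API
    }
}
```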


  • Ruby - Writing Hpricot data to a file

    - by John
    Hey everyone, I am currently doing some XML parsing and I've chosen to use Hpricot because of its ease of use and syntax, however I am running into some problems. I need to write a piece of XML data that I have found out to another file. However, when I do this the format is not preserved. For example, the content should look like this: <dict> <key>item1</key><value>12345</value> <key>item2</key><value>67890</value> <key>item3</key><value>23456</value> </dict> And assume that there are many entries like this in the document. I am iterating through the 'dict' items by using hpricot_element = Hpricot(xml_document_body) f = File.new('some_new_file.xml') (hpricot_element/:dict).each { |dict| f.write( dict.to_original_html ) } After using the above code, I would expect the output to look exactly like the XML shown above. However, to my surprise, the output of the file looks more like this: <dict>\n", " <key>item1</key><value>12345</value>\n", " <key>item2</key><value>67890</value>\n", " <key>item3</key><value>23456</value\n", " </dict> I've tried splitting at the "\n" characters and writing to the file one line at a time, but that didn't seem to work either, as it did not recognize the "\n" characters. Any help is greatly appreciated. It might be a very simple solution, but I am having trouble finding it. Thanks!


  • Writing data into new NFC tag not working?

    - by Nagaraj436
    I am a newbie to NFC Android app development. I am done with the app development and everything worked fine. As part of my testing I used MifareClassic as well as MifareDesfire tags to write and read. I am storing data in Ndef format. Initially I used the above testing tags with other apps like the NXP TagWriter and Tagstand TagWriter, and then I used them with my app. Everything worked fine. Later I used my app to write and read data from Sony FeliCa tags (new tags), which also worked fine. So I passed the app to the client for review, but I came to know that the app is not writing to new tags. If they are reset by other apps, then it works fine. So I did the same test here and found the same issue the client reported. What might be the issue? Has anyone come across the same kind of issue? Is it required to format a tag before using it? If so, how do I do that? Can someone help solve the issue? Thanks in advance.
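
    If the symptom is that factory-fresh tags fail while tags already touched by other writer apps succeed, the likely cause is that blank tags are not yet NDEF-formatted: they enumerate as NdefFormatable rather than Ndef, and format() both formats the tag and writes the first message. A sketch of handling both cases (standard Android API, error handling trimmed):

```java
import android.nfc.NdefMessage;
import android.nfc.Tag;
import android.nfc.tech.Ndef;
import android.nfc.tech.NdefFormatable;

public class TagWriter {
    /** Writes an NDEF message, formatting the tag first if it is still blank. */
    public boolean write(Tag tag, NdefMessage message) {
        try {
            Ndef ndef = Ndef.get(tag);
            if (ndef != null) {                    // tag is already NDEF-formatted
                ndef.connect();
                ndef.writeNdefMessage(message);
                ndef.close();
                return true;
            }
            NdefFormatable formatable = NdefFormatable.get(tag);
            if (formatable != null) {              // factory-fresh tag
                formatable.connect();
                formatable.format(message);        // format and write in one step
                formatable.close();
                return true;
            }
            return false;                          // tag does not support NDEF
        } catch (Exception e) {
            return false;
        }
    }
}
```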


  • Setting the type of a field in a superclass from a subclass (Java)

    - by Ibolit
    Hi. I am writing a project on Google App Engine; within it I have a number of abstract classes that I hope I will be able to use in my future projects, and a number of concrete classes inheriting from them. Among other abstract classes I have an abstract servlet that does user management, and I have an abstract user. The AbstractUser has all the necessary fields and methods for storing it in the datastore and telling whether the user is registered with my service or not. It does not implement any project-specific functionality. The abstract servlet that manages users refers only to the methods declared in the AbstractUser class, which allows it to generate links for logging in, logging out and registering (for unregistered users). In order to implement the project-specific user functionality I need to subclass the AbstractUser. The servlets I use in my project are all indirect descendants of that abstract user management servlet, and the user is a protected field in it, so the descendant servlets can use it as their own field. However, whenever I want to access any project-specific method of the concrete user, I need to cast it to that type, i.e. (abstract user managing servlet) ... AbstractUser user = getUser(); ... abstract protected AbstractUser getUser(); (project-specific abstract servlet) @Override protected AbstractUser getUser() { return MyUserFactory.getUser(); } any other project-specific servlet: int a = ((ConcreteUser) user).getA(); Well, what I'd like to do is to somehow make the type of “user” in the superclass depend on something in the project-specific abstract class. Is it at all possible? And I don't want to move all the user-management stuff into a project-specific layer, for I would like to have it already written for my future projects :) Thank you for your help.
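
    This is what a type parameter on the abstract layer buys you; a sketch, assuming the abstract servlet can be made generic (ConcreteUser and the factory call are stand-ins for the project-specific pieces):

```java
abstract class AbstractUser {
    // common datastore fields and registration logic live here
}

abstract class AbstractUserServlet<U extends AbstractUser> {
    protected U user;                      // typed as the subclass's own user class

    protected abstract U getUser();        // each project supplies its concrete user

    protected void loadUser() {
        user = getUser();
    }
}

class ConcreteUser extends AbstractUser {
    int getA() { return 42; }              // project-specific method
}

class ProjectServlet extends AbstractUserServlet<ConcreteUser> {
    @Override
    protected ConcreteUser getUser() {
        return new ConcreteUser();         // stand-in for MyUserFactory.getUser()
    }

    void doSomething() {
        int a = user.getA();               // no cast needed anymore
    }
}
```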


  • Reading and writing to files simultaneously?

    - by vipersnake005
    Moved the question here. Suppose I want to store 1,000,000,000 integers and cannot fit them in memory. I would use a file (which can easily handle that much data). How can I read and write to it at the same time? Using fstream file("file.txt", ios::out | ios::in ); doesn't create a file, in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: Let the contents of the file be 111111 Then if I run: #include <fstream> #include <iostream> using namespace std; int main() { fstream file("file.txt", ios::in|ios::out); char x; while( file>>x) { file<<'0'; } return 0; } Shouldn't the file's contents now be 101010 ? Read one character and then overwrite the next one with 0 ? Or in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file ? 1111110 ? But the contents remain unaltered. Please explain. Thank you.
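
    A file stream keeps one shared position for reads and writes, and switching between the two without repositioning is what leaves the contents untouched here. A sketch of the overwrite-every-other-character version, assuming file.txt already exists (ios::in | ios::out never creates it):

```cpp
#include <fstream>
#include <iostream>

int main()
{
    std::fstream file("file.txt", std::ios::in | std::ios::out);
    if (!file) {
        std::cerr << "file.txt must already exist\n";
        return 1;
    }

    char x;
    while (file.get(x)) {                // read one character
        file.seekp(file.tellg());        // explicitly switch to writing here
        file.put('0');                   // overwrite the *next* character
        file.flush();                    // flush before switching back to reading
        file.seekg(file.tellp());        // resume reading after the written character
    }
    return 0;                            // "111111" becomes "101010"
}
```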


  • Check for existing mapping when writing a custom applier in ConfORM

    - by Philip Fourie
    I am writing my first custom column name applier for ConfORM. How do I check if another column has already been mapped with the same name? This is what I have so far: public class MyColumnNameApplier : IPatternApplier<PropertyPath, IPropertyMapper> { public bool Match(PropertyPath subject) { return (subject.LocalMember != null); } public void Apply(PropertyPath subject, IPropertyMapper applyTo) { string shortColumnName = ToOracleName(subject); // How do I check if the short columnName already exists? applyTo.Column(cm => cm.Name(shortColumnName)); } private string ToOracleName(PropertyPath subject) { ... } } I need to shorten my class property names to less than 30 characters to fit within Oracle's 30-character limit. Because I am shortening the column names it is possible that I generate the same name for two different properties. I would like to know when a duplicate mapping occurs. If I don't handle this scenario, ConfORM/NHibernate allows two different properties to 'share' the same column name - this obviously creates a problem for me.


  • Writing own Unix shell in C - Problems with PATH and execv

    - by user1287523
    I'm writing my own shell in C. It needs to be able to display the user's current directory, execute commands based on the full path (must use execv), and allow the user to change the directory with cd. This IS homework. The teacher only gave us a basic primer on C and a very brief skeleton of how the program should work. Since I'm not one to give up easily, I've been researching how to do this for three days, but now I'm stumped. This is what I have so far: Displays the user's username, computer name, and current directory (defaults to home directory). Prompts the user for input, and gets the input. Splits the user's input by " " into an array of arguments. Splits the environment variable PATH by ":" into an array of tokens. I'm not sure how to proceed from here. I know I've got to use the execv command, but in my research on Google I haven't really found an example I understand. For instance, if the command is /bin/ls, how does execv know to display all files/folders from the home directory? How do I tell the system I changed the directory? I've been using this site a lot, which has been helpful: http://linuxgazette.net/111/ramankutty.html but again, I'm stumped. Thanks for your help. Let me know if I should post some of my existing code; I wasn't sure if it was necessary though.
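
    execv itself knows nothing about directories: ls simply lists the child's current working directory, which the child inherits from your shell, and that is also why cd has to be a built-in handled with chdir in the shell process itself. A bare-bones sketch of the execute step, assuming the input is already split into a NULL-terminated args array and PATH into path_dirs:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* args[0] is the command name; args must end with a NULL pointer. */
void run_command(char **args, char **path_dirs, int num_dirs)
{
    if (strcmp(args[0], "cd") == 0) {          /* built-in: must run in the shell itself */
        if (chdir(args[1]) != 0)
            perror("cd");
        return;
    }

    pid_t pid = fork();
    if (pid == 0) {                            /* child process */
        char full[1024];
        for (int i = 0; i < num_dirs; i++) {   /* try each directory from PATH */
            snprintf(full, sizeof(full), "%s/%s", path_dirs[i], args[0]);
            execv(full, args);                 /* returns only if this path failed */
        }
        fprintf(stderr, "%s: command not found\n", args[0]);
        exit(1);
    }
    waitpid(pid, NULL, 0);                     /* parent waits for the child */
}
```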


  • .Net file writing and string splitting issues

    - by sagar
    I have a requirement where the file should be split using a given character. The default splitting options are CRLF and LF. In both these cases I am splitting the line by \r\n and \r respectively. Also, I have a requirement that any size of file should be processed. (Processing is basically inserting the given string in a file at a given position.) For this I am reading the file in chunks of 1024 bytes. Then I am applying the string.Split() method. The Split() method gives options for ignoring white spaces and none. I have to add back these line break characters to the line; for this I am using a binary writer and I am writing the byte array to the new file. Issues: 1) When the line break is CRLF and the split option is NONE, white spaces are also added in the split array. When the second option is given (to ignore white spaces), CRLF works properly. 2) But the ignore-white-space option creates other problems: as I am reading the file byte by byte, I can't ignore a white space. 3) When the line break character is other than the default (e.g. '|'), a null value is prepended to the resulting line. Can anybody give a solution to these issues?
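
    For the splitting itself, String.Split accepts multi-character separators as a string[] and can either keep or drop the empty entries that appear between consecutive delimiters; a small sketch with a made-up '|' delimiter:

```csharp
using System;

class SplitExample
{
    static void Main()
    {
        string chunk = "line1|line2||line3";

        // Multi-character breaks such as "\r\n" work the same way.
        string[] separators = { "|" };

        string[] keepEmpty = chunk.Split(separators, StringSplitOptions.None);
        string[] dropEmpty = chunk.Split(separators, StringSplitOptions.RemoveEmptyEntries);

        Console.WriteLine(keepEmpty.Length);   // 4 - the empty entry between "||" is kept
        Console.WriteLine(dropEmpty.Length);   // 3 - the empty entry is removed
    }
}
```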


  • MVC design in Cocoa - are all 3 always necessary? Also: naming conventions, where to put Controller

    - by Nektarios
    I'm new to MVC although I've read a lot of papers and information on the web. I know it's somewhat ambiguous and there are many different interpretations of MVC patterns.. but the differences seem somewhat minimal My main question is - are M, V, and C always going to be necessary to be doing this right? I haven't seen anyone address this in anything I've read. Examples (I'm working in Cocoa/Obj-c although that shouldn't much matter).. 1) If I have a simple image on my GUI, or a text entry field that is just for a user's convenience and isn't saved or modified, these both would be V (view) but there's no M (no data and no domain processing going on), and no C to bridge them. So I just have some aspects that are "V" - seems fine 2) I have 2 different and visible windows that each have a button on them labeled as "ACTIVATE FOO" - when a user clicks the button on either, both buttons press in and change to say "DEACTIVATE FOO" and a third window appears with label "FOO". Clicking the button again will change the button on both windows to "ACTIVATE FOO" and will remove the third "FOO" window. In this case, my V consists of the buttons on both windows, and I guess also the third window (maybe all 3 windows). I definitely have a C, my Controller object will know about these buttons and windows and will get their clicks and hold generic states regarding windows and buttons. However, whether I have 1 button or 10 button, my window is called "FOO" or my window is called "BAR", this doesn't matter. There's no domain knowledge or data here - just control of views. So in this example, I really have "V" and "C" but no "M" - is that ok? 3) Final example, which I am running in to the most. I have a text entry field as my View. When I enter text in this, say a number representing gravity, I keep it in a Model that may do things like compute physics of a ball while taking in to account my gravity parameter. Here I have a V and an M, but I don't understand why I would need to add a C - a controller would just accept the signals from the View and pass it along to the Model, and vice versa. Being as the C is just a pure passthrough, it's really "junk" code and isn't making things any more reusable in my opinion. In most situations, when something changes I will need to change the C and M both in nearly identical ways. I realize it's probably an MVC beginner's mistake to think most situations call for only V and M.. leads me in to next subject 4) In Cocoa / Xcode / IB, I guess my Controllers should always be an instantiated object in IB? That is, I lay all of my "V" components in IB, and for each collection of View objects (things that are related) I should have an instantiated Controller? And then perhaps my Models should NOT be found in IB, and instead only found as classes in Xcode that tie in with Controller code found there. Is this accurate? This could explain why you'd have a Controller that is not really adding value - because you are keeping consistent.. 5) What about naming these things - for my above example about FOO / BAR maybe something that ends in Controller would be the C, like FancyWindowOpeningController, etc? And for models - should I suffix them with like GravityBallPhysicsModel etc, or should I just name those whatever I like? I haven't seen enough code to know what's out there in the wild and I want to get on the right track early on Thank you in advance for setting me straight or letting me know I'm on the right track. 
I feel like I'm starting to get it and most of what I say here makes sense, but validation of my guesses would help me feel confident.


  • Associating an Object with other Objects and Properties of those Objects

    - by alzoid
    I am looking for some help with designing some functionality in my application. I already have something similar designed but this problem is a little different. Background: In my application we have different Modules. Data in each module can be associated to other modules. Each Module is represented by an Object in our application. Module 1 can be associated with Module 2 and Module 3. Currently I use a factory to provide the proper DAO for getting and saving this data. It looks something like this: class Module1Factory { public static Module1BridgeDAO createModule1BridgeDAO(int moduleid) { switch (moduleId) { case Module.Module2Id: return new Module1_Module2DAO(); case Module.Module3Id: return new Module1_Module3DAO(); default: return null; } } } Module1_Module2 and Module1_Module3 implement the same BridgeModule interface. In the database I have a Table for every module (Module1, Module2, Module3). I also have a bridge table for each module (they are many to many) Module1_Module2, Module1_Module3 etc. The DAO basically handles all code needed to manage the association and retrieve its own instance data for the calling module. Now when we add new modules that associate with Module1 we simply implement the ModuleBridge interface and provide the common functionality. New Development We are adding a new module that will have the ability to be associated with other Modules as well as specific properties of that module. The module is basically providing the user the ability to add their custom forms to our other modules. That way they can collect additional information along with what we provide. I want to start associating my Form module with other modules and their properties. Ie if Module1 has a property Category, I want to associate an instance From data with that property. There are many Forms. If a users creates an instance of Module2, they may always want to also have certain form(s) attached to that Module2 instance. If they create an instance of Module2 and select Category 1, then I may want additional Form(s) created. I prototyped something like this: Form FormLayout (contains the labels and gui controls) FormModule (associates a form with all instances of a module) Form Instance (create an instance of a form to be filled out) As I thought about it I was thinking about making a new FormModule table/class/dao for each Module and Property that I add. So I might have: FormModule1 FormModule1Property1 FormModule1Property2 FormModule1Property3 FormModule1Property4 FormModule2 FormModule3 FormModule3Property1 Then as I did previously, I would use a factory to get the proper DAO for dealing with all of these. I would hand it an array of ids representing different modules and properties and it would return all of the DAOs that I need to call getForms(). Which in turn would return all of the forms for that particular bridge. Some points This will be for a new module so I dont need to expand on the factory code I provided. I just wanted to show an example of what I have done in the past. The new module can be associated with: Other Modules (ie globally for any instance of that module data), Other module properties (ie only if the Module instance has a certian value in one of its properties) I want to make it easy for developers to add associations with other modules and properties easily Can any one suggest any design patterns or strategy's for achieving this? If anything is unclear please let me know. Thank you, Al


  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece to a Service-Oriented Architecture (SOA), or is it just vendor hype in order to sell a particular product such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With this being said, it is my personal belief that ESBs are an important architectural piece to any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB regardless of any vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories: Interaction Patterns: Enable service interaction points to send and/or receive messages from the bus Mediation Patterns: Enable the altering of message exchanges Deployment Patterns: Support solution deployment into a federated infrastructure Examples of Interaction Patterns: One-Way Message Synchronous Interaction Asynchronous Interaction Asynchronous Interaction with Timeout Asynchronous Interaction with a Notification Timer One Request, Multiple Responses One Request, One of Two Possible Responses One Request, a Mandatory Response, and an Optional Response Partial Processing Multiple Application Interactions Benefits of the Mediation Pattern: Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently Design an intermediary to decouple many peers Promote the many-to-many relationships between interacting peers to “full object status” Examples of Deployment Patterns: Global ESB: Services share a single namespace and all service providers are visible to every service requester across an entire network Directly Connected ESB: Global service registry that enables independent ESB installations to be visible Brokered ESB: Bridges services that are reluctant to expose requesters or providers to ESBs in other domains Federated ESB: Service consumers and providers connect to the master or to a dependent ESB to access services throughout the network References: Mediator Design Pattern. (2011). Retrieved 2011, from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator Hartman, P. (2006, 24 1). ESB Patterns that "Click". Retrieved 2011, from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011, from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html Oracle. (2005). 12 Interaction Patterns. Retrieved 2011, from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD


  • Binding not writing to datasource on .NET Compact Framework Form -- works on Full Framework

    - by Dave Welling
    I have a problem with a bound user control writing back to it's datasource on a NetCF forms application. The application is too complex to post code, so I made a toy version to show you. I create a form, usercontrol with a combobox, a class (testBind) and another class (TestLookup). I bind a property of the usercontrol ("value") to a property ("selectedValue") on the testBind class. The testBind class implements INotifyPropertyChanged. I create a few fascade methods on the user control to bind the contained combobox to a BindingList(of TestLookup). I create a button to show the value of the testBind bound property (in a MessageBox). The messagebox returns "-1" every time regardless of the combobox entry selected. I can take the EXACT same code, paste it in a full framework Forms app and it will return the correct value of the selected combobox entry. Imports System.ComponentModel Public Class Form2 Inherits Form Private _testBind1 As testBind Private _testUserControlX As UserControlX Friend WithEvents _buttonX As System.Windows.Forms.Button Public Sub New() _buttonX = New System.Windows.Forms.Button _buttonX.Location = New System.Drawing.Point(126, 228) _buttonX.Size = New System.Drawing.Size(70, 21) _testBind1 = New testBind _testUserControlX = New UserControlX() Dim _lookup As New System.ComponentModel.BindingList(Of TestLookup)() _lookup.Add(New TestLookup(1, "text1")) _lookup.Add(New TestLookup(2, "text2")) _testUserControlX.DataSource = _lookup _testUserControlX.DisplayMember = "Text" _testUserControlX.ValueMember = "ID" _testUserControlX.DataBindings.Add("Value", _testBind1, "SelectedID", False, DataSourceUpdateMode.OnValidation) MinimizeBox = False Controls.Add(_testUserControlX) Controls.Add(_buttonX) End Sub Private Sub ButtonX_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles _buttonX.Click MessageBox.Show(_testBind1.SelectedID.ToString()) End Sub Public Class testBind Implements System.ComponentModel.INotifyPropertyChanged Private _selectedRow As Integer = -1 Public Event PropertyChanged(ByVal sender As Object, ByVal e As System.ComponentModel.PropertyChangedEventArgs) Implements System.ComponentModel.INotifyPropertyChanged.PropertyChanged Protected Sub OnPropertyChanged(ByVal PropertyName As String) RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(PropertyName)) End Sub Public Property SelectedID() As Integer Get Return _selectedRow End Get Set(ByVal value As Integer) _selectedRow = value OnPropertyChanged("SelectedID") End Set End Property End Class Public Class TestLookup Private _text As String Private _id As Integer Public Sub New(ByVal id As Integer, ByVal text As String) _text = text _id = id End Sub Public Property ID() As Integer Get Return _id End Get Set(ByVal value As Integer) _id = value End Set End Property Public Property Text() As String Get Return _text End Get Set(ByVal value As String) _text = value End Set End Property End Class End Class Public Class UserControlX Inherits System.Windows.Forms.UserControl Friend WithEvents ComboBox1 As System.Windows.Forms.ComboBox Public Sub New() Me.ComboBox1 = New System.Windows.Forms.ComboBox Me.Controls.Add(Me.ComboBox1) End Sub Public Property Value() As Integer Get Return ComboBox1.SelectedValue End Get Set(ByVal value As Integer) ComboBox1.SelectedValue = value End Set End Property Public Property DataSource() As Object Get Return ComboBox1.DataSource End Get Set(ByVal value As Object) ComboBox1.DataSource = value End Set End Property Public Property ValueMember() As String Get 
Return ComboBox1.ValueMember End Get Set(ByVal value As String) ComboBox1.ValueMember = value End Set End Property Public Property DisplayMember() As String Get Return ComboBox1.DisplayMember End Get Set(ByVal value As String) ComboBox1.DisplayMember = value End Set End Property End Class


  • C# Design Questions

    - by guazz
    How to approach unit testing of private methods? I have a class that loads Employee data into a database. Here is a sample: public class EmployeeFacade { public Employees EmployeeRepository = new Employees(); public TaxDatas TaxRepository = new TaxDatas(); public Accounts AccountRepository = new Accounts(); //and so on for about 20 more repositories etc. public bool LoadAllEmployeeData(Employee employee) { if (employee == null) throw new Exception("..."); EmployeeRepository emps = new EmployeeRepository(); bool exists = emps.FetchExisting(emps.Id); if (!exists) { emps.AddNew(); } try { emps.Id = employee.Id; emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName; emps.SomeOtherAttribute; } catch() {} try { emps.Save(); } catch(){} try { LoadorUpdateTaxData(employee.TaxData); } catch() {} try { LoadorUpdateAccountData(employee.AccountData); } catch() {} ... etc. for about 20 more other employee objects } private bool LoadorUpdateTaxData(employeeId, TaxData taxData) { if (taxData == null) throw new Exception("..."); ...same format as above but using AccountRepository } private bool LoadorUpdateAccountData(employee.TaxData) { ...same format as above but using TaxRepository } } I am writing an application to take serialised objects(e.g. Employee above) and load the data to the database. I have a few design question that I would like opinions on: A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practace to name the pattern on the class name? B - Is it good to call the concrete entities of my DAL layer classes "Repositories" e.g. "EmployeeRepository" ? C - Is using the repositories in this way sensible or should I create a method on the repository itself to take, say, the Employee and then load the data from there e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aim for cohesive class and but this will requrie the repository to have knowledge of the Employee object which may not be good? D - Is there any nice way around of not having to check if an object is null at the begining of each method? E - I have a EmployeeRepository, TaxRepository, AccountRepository declared as public for unit testing purpose. These are really private enities but I need to be able to substitute these with stubs so that the won't write to my database(I overload the save() method to do nothing). Is there anyway around this or do I have to expose them? F - How can I test the private methods - or is this done (something tells me it's not)? G- "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" this breaks the Law of Demeter but how do I adjust my objects to abide by the law?
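
    On points E and F, one sketch of a middle ground, assuming the repositories can sit behind interfaces: inject them through the constructor, keep the fields private, and hand stubs in from the tests; the private helpers are then exercised through the public LoadAllEmployeeData rather than tested directly.

```csharp
public interface IEmployeeRepository
{
    bool FetchExisting(int id);
    void Save();
    // ...whatever other members the facade actually uses
}

public class EmployeeFacade
{
    private readonly IEmployeeRepository employees;   // private again, not public

    // Production code passes the real repository; tests pass a stub.
    public EmployeeFacade(IEmployeeRepository employees)
    {
        this.employees = employees;
    }
}

// In the test project:
class StubEmployeeRepository : IEmployeeRepository
{
    public bool FetchExisting(int id) { return false; }
    public void Save() { /* no database write */ }
}
```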


  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design model for a script to parse an excel workbook into two separate workbooks after pulling data from other locations like the command line, and a database. The high level details are as follows. I need to parse an excel workbook containing a sheet that lists unique question names, the only reliable information that can be parsed from the question name is the book code that identifies the title and edition of the textbook the question is associated with, the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression. '^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$' The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find. There will be a minimum of three other columns in this spreadsheet; one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator. 1 | 1 | qname1 | url | description | url | description ... 1 | 1 | qname2 | url | description 1 | 1 | qname3 | url | description | url | description | url | The asset can be indicated by a full or partial uniform resource locator, the partial url will need to be completed before it can be fed into the application. There theoretically could be no limit to the number of asset columns, the assets will be grouped in columns by type. Some times additional data will have to be retrieved from a database or combined with the book code before the asset url is complete and can be understood by the application that will be using the asset. The type is an abstraction, there are eight types right now, each with their own logic in how the uniform resource locator is handled and or completed, and I have to add a new type and its logic every three or four months. For each asset url there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MSs obscure code page down to something 7-bit ascii can handle.) Now that all the details are filled-in I can get to the actual problem of parsing the file. I need to split the information in this excel workbook into two separate workbooks. The first workbook will group all the questions by section in rows. With the first cell being the section doublet and the rest of the cells in the row are the question names. 1.1 | qname1 | qname2 | qname3 | qname4 | 1.2 | qname1 | qname2 | qname3 | 1.3 | qname1 | qname2 | qname3 | qname4 | qname5 There is no set number of questions for each section as you can see from the above example. The second workbook is more complicated, there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is string representing the asset type, the four is the full and complete uniform resource locator for the asset, and the fifth columns is the optional text description for the asset. 
q1 | mtype1 | atype1 | url | description q1 | mtype2 | atype2 | url | description q1 | mtype2 | atype3 | url | description q2 | mtype1 | atype1 | url | description q2 | mtype2 | atype3 | url | description For the original six types I did have a script that parsed the source excel workbook into the other two excel workbooks, and I was able to add two more types until I ran aground on the implementation of the ninth type and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the test first on this time around. I'm stuck with the format for the resulting two workbooks, this script is glue code, development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers but in the editorial department, editorial is co-sponsor of the project, and am expected to fix pesky details like this (I'm foaming at the mouth as I type this). I've tried factories, I've tried different object models, but each resulting workbook is so different when I find a design that works for generating one workbook the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and or sympathy.


  • Can you find a pattern to sync files knowing only dates and filenames?

    - by Robert MacLean
    Imagine if you will a operating system that had the following methods for files Create File: Creates (writes) a new file to disk. Calling this if a file exists causes a fault. Update File: Updates an existing file. Call this if a file doesn't exist causes a fault. Read File: Reads data from a file. Enumerate files: Gets all files in a folder. Files themselves in this operating system only have the following meta data: Created Time: The original date and time the file was created, by the Create File method. Modified Time: The date and time the file was last modified by the Update File method. If the file has never been modified, this will equal the Create Time. You have been given the task of writing an application which will sync the files between two directories (lets call them bill and ted) on a machine. However it is not that simple, the client has required that The application never faults (see methods above). That while the application is running the users can add and update files and those will be sync'd next time the application runs. Files can be added to either the ted or bill directories. File names cannot be altered. The application will perform one sync per time it is run. The application must be almost entirely in memory, in other words you cannot create a log of filenames and write that to disk and then check that the next time. The exception to point 6 is that you can store date and times between runs. Each date/time is associated with a key labeled A through J (so you have 10 to use) so you can compare keys between runs. There is no way to catch exceptions in the application. Answer will be accepted based on the following conditions: First answer to meet all requirements will be accepted. If there is no way to meet all requirements, the answer which ensures the smallest amount of missed changes per sync will be accepted. A bounty will be created (100 points) as soon as possible for the prize. The winner will be selected one day before the bounty ends. Please ask questions in the comments and I will gladly update and refine the question on those.


  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch. lesson1: A--B--C \ lesson2: D--E--F \ lesson3: G--H--I However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3. lesson1: A--B--C--------------J \ \ lesson2: D--E--F--------K \ \ lesson3: G--H--I--L And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.) lesson1: A--B--C--------------J \ \ lesson2: D--E--F--------K--M \ \ \ lesson3: G--H--I--L--N Here are the problems I imagine having: If I had many lessons, and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off. When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time? If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way. When I look at the history, I imagine becoming terribly confused about what I've done. Compare the above diagram to a hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems): lesson1: A--B--C--J \ lesson2: D2--E2--F2--M \ lesson3: G2--H2--I2 Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?
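
    For what it's worth, propagating a lesson 1 fix with the branch-per-lesson layout is mechanical, if a little verbose; a sketch (branch and tag names assumed):

```sh
# fix the bug on the earliest branch it applies to
git checkout lesson1
git commit -am "fix lesson 1 bug"

# carry the fix forward through the later lessons
git checkout lesson2 && git merge lesson1
git checkout lesson3 && git merge lesson2

# tag every branch for a published edition
for b in lesson1 lesson2 lesson3; do
  git tag "v1.0-$b" "$b"
done
```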


  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this: Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries). The analyst analyzes the data created in (2), gets close to her goal, but sees that needs more data and so goes back to (1). Rinse repeat until the tables and graphics meet QA/QC and satisfy the client. Write report incorporating tables and graphics. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change. At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks! PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (w/ ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into SQL database, and a step for a templating language like sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top R=/home/wsprague/R-2.9.2/bin/R persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R $R --slave -f ImportData.R persondata.Munged.RData : MungeData.R persondata.RData Functions.R $R --slave -f MungeData.R report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R $R --slave -f TabulateAndGraph.R report.txt

