Search Results

Search found 14297 results on 572 pages for 'self tracking entities'.


  • Getting the last member of a group on an intermediary M2M

    - by rh0dium
    If we look at the existing docs, what is the best way to get the last member added? This is similar to an existing question, but what I want is to be able to do this:

        group = Group.objects.get(id=1)
        group.get_last_member_added()  # ordered by ('-date_added')
        <Person: FOO>

    I think the best way is through a manager, but how do you do this on an intermediary model?

        class Person(models.Model):
            name = models.CharField(max_length=128)

            def __unicode__(self):
                return self.name

        class Group(models.Model):
            name = models.CharField(max_length=128)
            members = models.ManyToManyField(Person, through='Membership')

            def __unicode__(self):
                return self.name

        class Membership(models.Model):
            person = models.ForeignKey(Person)
            group = models.ForeignKey(Group)
            date_joined = models.DateField()
            invite_reason = models.CharField(max_length=64)
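    A sketch of one possible approach (not from the original post; note the through model's field is date_joined, not date_added): order the through model by date and take the newest row. The .first() call assumes Django 1.6+; on older versions, slice with [:1] instead.

        class Group(models.Model):
            # ... fields as above ...

            def get_last_member_added(self):
                # Newest Membership row wins; returns None for an empty group.
                latest = self.membership_set.order_by('-date_joined').first()
                return latest.person if latest else None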

    Read the article

  • UITableView cell selection disabled and UITableViewHeader Enabled

    - by andyPaul
    I am adding my header view with self.tableView.tableHeaderView = headerView;. The table view has 10 cells. I want to disable cell selection, but touch events on the header view must stay enabled. To achieve this I added the following code:

        self.tableView.userInteractionDisabled = YES;
        self.headerView.userInteractionDisabled = NO;
        self.headerView.exclusiveTouch = YES;

    Where am I wrong? The basic idea of the implementation is: if the header view is enabled then cell selection is disabled, and vice versa.
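    A sketch of one alternative (not from the original post): UITableView exposes an allowsSelection property, which disables row selection without touching the header view's touch handling.

        // Rows can no longer be selected; headerView still receives touches.
        self.tableView.allowsSelection = NO;

        // Flip it back when the header view should be disabled instead:
        // self.tableView.allowsSelection = YES;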

    Read the article

  • Python OOP - object has no attribute

    - by user1744269
    I am attempting to learn how to program. I really do want to learn; I love the building and design aspect of it. However, in Java and Python, I have tried and failed with programs as they pertain to objects, classes, and methods. I am trying to develop some code, but I'm stumped. I know this is a simple error, but I am lost! I am hoping someone can guide me to a working program, but also help me learn (criticism is not only expected, but APPRECIATED).

        class Converter:
            def cTOf(self, numFrom):
                numFrom = self.numFrom
                numTo = (self.numFrom * (9/5)) + 32
                print (str(numTo) + ' degrees Farenheit')
                return numTo

            def fTOc(self, numFrom):
                numFrom = self.numFrom
                numTo = ((numFrom - 32) * (5/9))
                return numTo

        convert = Converter()
        numFrom = (float(input('Enter a number to convert.. ')))
        unitFrom = input('What unit would you like to convert from.. ')
        unitTo = input('What unit would you like to convert to.. ')

        if unitFrom == ('celcius'):
            convert.cTOf(numFrom)
            print(numTo)
            input('Please hit enter..')

        if unitFrom == ('farenheit'):
            convert.fTOc(numFrom)
            print(numTo)
            input('Please hit enter..')
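    A sketch of one possible correction (renamed slightly for clarity; assumes Python 3, where / is true division): the error comes from reading self.numFrom, which is never assigned anywhere, and numTo is local to the methods, so the caller must capture the return value instead of referencing numTo directly.

        class Converter:
            def c_to_f(self, num_from):
                # Use the argument; there is no self.numFrom attribute.
                return num_from * (9 / 5) + 32

            def f_to_c(self, num_from):
                return (num_from - 32) * (5 / 9)

        convert = Converter()
        num_from = float(input('Enter a number to convert.. '))
        unit_from = input('What unit would you like to convert from.. ')

        if unit_from == 'celsius':
            print(convert.c_to_f(num_from), 'degrees Fahrenheit')
        elif unit_from == 'fahrenheit':
            print(convert.f_to_c(num_from), 'degrees Celsius')
        input('Please hit enter..')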

    Read the article

  • How to take a layer's snapshot with mask layer?

    - by OpenThread
    I added a mask to a UIView's layer:

        CGImageRef maskImageRef = [UIImage imageNamed:@"Icon.png"].CGImage;
        CALayer *maskLayer = [CALayer layer];
        maskLayer.contents = (__bridge id)maskImageRef;
        self.layer.mask = maskLayer;

    Then I use this code to get a snapshot of the UIView:

        [self.layer renderInContext:mainViewContentContext];

    But the mask wasn't drawn. How do I draw self.layer with the mask? Special thx!
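    A sketch of one workaround (assumes the mask image covers the view's bounds): renderInContext: does not apply layer masks, so apply the mask yourself by clipping the snapshot context to the mask image before rendering. The flip compensates for Core Graphics' inverted y-axis when clipping with a CGImage.

        UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Clip to the mask image (flip so the mask isn't upside down).
        CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextClipToMask(ctx, self.bounds, maskImageRef);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextTranslateCTM(ctx, 0, -self.bounds.size.height);

        [self.layer renderInContext:ctx];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();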

    Read the article

  • Subclassing Cocoa means no subclass ivars in some cases?

    - by Michael
    The question generally comes from self = [super init]. If I'm subclassing NSSomething and, in my init method, self = [super init] returns an object of a different class, does that mean I cannot have my own ivars in my subclass, because self points to a different class? I'd appreciate some examples if my statement is wrong.

    Read the article

  • Timed selector never performed

    - by sudo rm -rf
    I've added an observer for my method:

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(closeViewAfterUpdating)
                                                     name:@"labelUpdatedShouldReturn"
                                                   object:nil];

    Then my relevant methods:

        - (void)closeViewAfterUpdating
        {
            NSLog(@"Part 1 called");
            [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil afterDelay:2.0];
        }

        - (void)closeViewAfterUpdating2
        {
            NSLog(@"Part 2 called");
            [self dismissModalViewControllerAnimated:YES];
        }

    The only reason I've split this method into two parts is so that I can have a delay before the second method fires. The problem is, the second method is never called. My NSLog output shows Part 1 called, but it never fires part 2. Any ideas?

    EDIT: I'm calling the notification from a background thread; does that make a difference by any chance? Here's how I'm creating my background thread:

        [NSThread detachNewThreadSelector:@selector(getWeather) toTarget:self withObject:nil];

    and in getWeather I have:

        [[NSNotificationCenter defaultCenter] postNotificationName:@"updateZipLabel" object:textfield.text];

    Also, calling:

        [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil];

    does work.

    EDITx2: I fixed it. I just needed to post the notification on my main thread and it worked just fine.
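    A sketch of why this happens and of the fix the poster describes (the helper method name below is illustrative): performSelector:withObject:afterDelay: schedules a timer on the current thread's run loop, and a thread created with detachNewThreadSelector: never runs its run loop, so the delayed selector never fires. Notifications are delivered on the thread that posts them, which is why posting from the main thread fixes it.

        - (void)getWeather
        {
            // ... fetch data on the background thread ...

            // Hop to the main thread before posting, so observers (and any
            // delayed selectors they schedule) run on the main run loop.
            [self performSelectorOnMainThread:@selector(postUpdateNotification)
                                   withObject:nil
                                waitUntilDone:NO];
        }

        - (void)postUpdateNotification
        {
            [[NSNotificationCenter defaultCenter] postNotificationName:@"updateZipLabel"
                                                                object:textfield.text];
        }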

    Read the article

  • How do I filter values in a Django form using ModelForm?

    - by malandro95
    I am trying to use a ModelForm to add my data. It is working well, except that the ForeignKey dropdown list shows all values, and I only want it to display the values that are pertinent to the logged-in user. Here is my model for ExcludedDate, the record I want to add:

        class ExcludedDate(models.Model):
            date = models.DateTimeField()
            reason = models.CharField(max_length=50)
            user = models.ForeignKey(User)
            category = models.ForeignKey(Category)
            recurring = models.ForeignKey(RecurringExclusion)

            def __unicode__(self):
                return self.reason

    Here is the model for the category, which is the table containing the relationship that I'd like to limit by user:

        class Category(models.Model):
            name = models.CharField(max_length=50)
            user = models.ForeignKey(User, unique=False)

            def __unicode__(self):
                return self.name

    And finally, the form code:

        class ExcludedDateForm(ModelForm):
            class Meta:
                model = models.ExcludedDate
                exclude = ('user', 'recurring',)

    How do I get the form to display only the subset of categories where category.user equals the logged-in user?
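    A sketch of the usual pattern (not from the original post): pass the logged-in user into the form and narrow the field's queryset in __init__, since the form cannot know the request user on its own.

        class ExcludedDateForm(ModelForm):
            class Meta:
                model = models.ExcludedDate
                exclude = ('user', 'recurring',)

            def __init__(self, user=None, *args, **kwargs):
                super(ExcludedDateForm, self).__init__(*args, **kwargs)
                if user is not None:
                    # Only offer this user's categories in the dropdown.
                    self.fields['category'].queryset = Category.objects.filter(user=user)

    In the view, construct it as ExcludedDateForm(user=request.user, data=request.POST or None).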

    Read the article

  • How to make sure an action completes before you continue

    - by HurkNburkS
    I am trying to close a UIView that's handled in one method by calling it from another method. The UIView closes fine, but not until after all of the processing in the current method has finished. I would like to know if there is a way to force the first thing to happen first (i.e., close the UIViews) and then continue the current method. This is what my method looks like:

        - (void)selectDeselectAllPressed:(UIButton *)button
        {
            int id = button.tag;

            [SVProgressHUD showWithStatus:@"Updating" maskType:SVProgressHUDMaskTypeGradient];
            [self displaySelected]; // removes current view so the HUD will not be behind it

            if (id == 1) {
                [self selectAllD];
            } else if (id == 2) {
                [self deselectAllD];
            } else if (id == 3) {
                [self selectAllI];
            } else if (id == 4) {
                [self deselectAllI];
            }
        }

    As you can see, this method is called when a button is pressed. I would like the displaySelected method to do what it needs to do before any of the other methods are called. Currently, when I debug this, displaySelected is called, the thread walks through it, then continues to the if statement, and only after the method in the if statement has finished are the displaySelected changes made... it's so weird. Any help would be greatly appreciated.
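    A sketch of one explanation and workaround (not from the original post): UIKit only redraws once the current run-loop turn finishes, so visual changes made by displaySelected cannot appear until this method returns. Deferring the remaining work back onto the main queue lets the screen update first:

        [SVProgressHUD showWithStatus:@"Updating" maskType:SVProgressHUDMaskTypeGradient];
        [self displaySelected];

        // Let the run loop draw the changes, then do the heavy work.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (button.tag == 1) {
                [self selectAllD];
            } else if (button.tag == 2) {
                [self deselectAllD];
            } // ... remaining branches as in the original ...
        });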

    Read the article

  • Why does gcc warn about incompatible struct assignment with a `self = [super initDesignatedInit];' c

    - by gavinbeatty
    I have the following base/derived class setup in Objective-C:

        @interface ASCIICodeBase : NSObject {
        @protected
            char code_[4];
        }
        - (Base *)initWithASCIICode:(const char *)code;
        @end

        @implementation ASCIICodeBase
        - (ASCIICodeBase *)initWithCode:(const char *)code len:(size_t)len
        {
            if (len == 0 || len > 3) {
                return nil;
            }
            if (self = [super init]) {
                memset(code_, 0, 4);
                strncpy(code_, code, 3);
            }
            return self;
        }
        @end

        @interface CountryCode : ASCIICodeBase
        - (CountryCode *)initWithCode:(const char *)code;
        @end

        @implementation CountryCode
        - (CountryCode *)initWithCode:(const char *)code
        {
            size_t len = strlen(code);
            if (len != 2) {
                return nil;
            }
            self = [super initWithCode:code len:len]; // here
            return self;
        }
        @end

    On the line marked "here", I get the following gcc warning:

        warning: incompatible Objective-C types assigning 'struct ASCIICodeBase *', expected 'struct CurrencyCode *'

    Is there something wrong with this code, or should I have ASCIICodeBase return id? Or maybe use a cast on the "here" line?
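    A sketch of the conventional fix: declare the initializers as returning id (the idiom of that era; modern compilers use instancetype), so the static type is compatible with whatever subclass is being initialized and no cast is needed.

        // In ASCIICodeBase:
        - (id)initWithCode:(const char *)code len:(size_t)len;

        // In CountryCode:
        - (id)initWithCode:(const char *)code;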

    Read the article

  • How does Ruby's Array.| compare elements for equality?

    - by Max Howell
    Here's some example code:

        class Obj
          attr :c, true

          def ==(that)
            p '=='
            that.c == self.c
          end

          def <=>(that)
            p '<=>'
            that.c <=> self.c
          end

          def equal?(that)
            p 'equal?'
            that.c.equal? self.c
          end

          def eql?(that)
            p 'eql?'
            that.c.eql? self.c
          end
        end

        a = Obj.new
        b = Obj.new
        a.c = 1
        b.c = 1
        p [a] | [b]

    It prints 2 objects, but it should print 1 object. None of the comparison methods get called. How is Array.| comparing for equality?
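    A sketch of what is likely happening (based on how MRI implements array set operations): Array#| de-duplicates through an internal hash, so it consults #hash first and only calls #eql? when two hashes collide. Defining #hash to agree with #eql? makes the union treat the two objects as one.

        class Obj
          def hash
            c.hash   # objects that are eql? must share a hash
          end
        end

        p [a] | [b]   # => now prints a single object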

    Read the article

  • Is there a way to update the height of a single UITableViewCell, without recalculating the height for every cell?

    - by Chris Vasselli
    I have a UITableView with a few different sections. One section contains cells that resize as the user types text into a UITextView. Another section contains cells that render HTML content, for which calculating the height is relatively expensive. Right now, when the user types into the UITextView, in order to get the table view to update the height of the cell, I call

        [self.tableView beginUpdates];
        [self.tableView endUpdates];

    However, this causes the table to recalculate the height of every cell in the table, when I really only need to update the single cell that was typed into. Not only that, but instead of recalculating the estimated height using tableView:estimatedHeightForRowAtIndexPath:, it calls tableView:heightForRowAtIndexPath: for every cell, even those not being displayed. Is there any way to ask the table view to update just the height of a single cell, without doing all of this unnecessary work?

    Update: I'm still looking for a solution to this. As suggested, I've tried using reloadRowsAtIndexPaths:, but it doesn't look like this will work. Calling reloadRowsAtIndexPaths: with even a single row will still cause heightForRowAtIndexPath: to be called for every row, even though cellForRowAtIndexPath: will only be called for the row you requested. In fact, it looks like any time a row is inserted, deleted, or reloaded, heightForRowAtIndexPath: is called for every row in the table. I've also tried putting code in willDisplayCell:forRowAtIndexPath: to calculate the height just before a cell is going to appear. In order for this to work, I would need to force the table view to re-request the height for the row after I do the calculation. Unfortunately, calling

        [self.tableView beginUpdates];
        [self.tableView endUpdates];

    from willDisplayCell:forRowAtIndexPath: causes an index-out-of-bounds exception deep in UITableView's internal code. I guess they don't expect us to do this. I can't help but feel it's a bug in the SDK that in response to [self.tableView endUpdates] it doesn't call estimatedHeightForRowAtIndexPath: for cells that aren't visible, but I'm still trying to find some kind of workaround. Any help is appreciated.
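    A sketch of one workaround (the cache property and helper name are illustrative, not from the original post): since the table-wide height passes apparently can't be avoided, make them cheap by memoizing each row's height and invalidating only the row that changed.

        - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSNumber *cached = self.heightCache[indexPath];   // NSMutableDictionary property
            if (cached != nil) {
                return cached.floatValue;
            }
            CGFloat height = [self computeHeightForRowAtIndexPath:indexPath]; // the expensive part
            self.heightCache[indexPath] = @(height);
            return height;
        }

        // When the text view in one cell changes:
        [self.heightCache removeObjectForKey:indexPath];
        [self.tableView beginUpdates];
        [self.tableView endUpdates];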

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

        - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
        - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
        - Improved Rapid Start Kits
        - Extensible Metering and Chargeback
        - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Services, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions; define standardized pools of hardware and software for provisioning; role-based access to cater to different classes of users; automated procedures to provision the predefined database definitions; setup of chargeback plans based on service tiers and database configuration sizes; etc. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

        - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
        - The standby databases can be single instance, RAC, or RAC One Node databases
        - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
        - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
        - All database versions 10g to 12c are supported (as certified with EM 12c)
        - All 3 protection modes can be used - Maximum Availability, Maximum Performance, Maximum Protection
        - Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported in EM 12cR4):

        Primary | Standby [1 or more]
        --------+--------------------
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2, and the Oracle OpenWorld presentation by CERN, Efficient Database Cloning using Direct NFS and CloneDB. The advantages of the new CloneDB integration with EM12c Snap Clone are:

        - Space and time savings
        - Ease of setup - no additional software is required other than the Oracle database binary
        - Works on all platforms
        - Reduces the dependence on storage administrators
        - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
        - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
        - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zones, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup. It can be run from this default location or from any server which has the emcli client installed. For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py. For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py. The database_cloud_setup.py script takes two inputs: Cloud boundary xml: this file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM. Input xml: this file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback Last but not least, Metering and Chargeback in release 4 has been made extensible in every possible regard. The new extensibility features allow customers, partners, system integrators, etc. to: extend chargeback to any target type managed in EM; promote any metric in EM as a chargeback entity; extend the list of charge items via metric or configuration extensions; and model abstract entities like no. of backup requests, job executions, support requests, etc. A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention.
    Most of these have been asked for by customers like you. These are:

        - Custom naming of DB Services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM.
        - 'Create like' of Service Templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
        - Profile viewer: view the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
        - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool. As an extension, you can also delete successful requests.
        - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them.
        - Support for multiple tablespaces for schema as a service: in addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]. -- Adeesh Fulay (@adeeshf)

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hosts a SQL Server 2000 environment which provides database services for a product we sell; the product is loaded as a rich-client application at each of our many clients and their workstations. Currently, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials - since everything is clear-text today and the authentication is weakly encrypted - and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the clients. A few things, however: 1) We have our own Windows domain and all our servers are joined to our private domain. Our clients know nothing of our domain. 2) Typically, our clients connect to our datacenter servers either by: a) using a TCP/IP address, or b) using a DNS name that we publish via the internet, zone transfers from our DNS servers to our customers, or static HOSTS entries the client adds. 3) From what I understand about enabling encryption, I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt, such as TCP/IP. 4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed option, but do have potential issues; I'll explain in a bit. If I go with a third-party cert, such as Verisign or Network Solutions, what kind of certificate do I request? These aren't IIS certificates? When I create a self-signed certificate via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world? 5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does it do to my clients when they try to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean I have to have every one of my clients install it to verify? 6) If I use a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection in order to verify the certificate? And what do I do about the FQDN? It sounds like they have to use my private domain name - which is not published - and can no longer use the one that I set up for them to use? 7) I plan on upgrading to SQL 2005 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000? Any help or guidance would be appreciated.

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction I'm currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface, some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I've got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I've not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

        // Create a transactional scope.
        using (TransactionScope scope = new TransactionScope())
        {
            // Do something.

            // Do something else.

            // Commit the transaction.
            scope.Complete();
        }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible. When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for "all or nothing" delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction.
    The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

        QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

        Console.Write("Sending");

        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            for (int i = 0; i < 10; i++)
            {
                // Send a message
                BrokeredMessage msg = new BrokeredMessage("Message: " + i);
                queueClient.Send(msg);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
            Console.WriteLine();

            // Should we commit the transaction?
            Console.WriteLine("Commit send 10 messages? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
        }
        Console.WriteLine();
        messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent, the user has the option to either commit the transaction or abandon it. If the user enters "yes", the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than "yes", the transaction will not commit, and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

        using (TransactionScope scope = new TransactionScope())
        {
            while (true)
            {
                // Receive a message.
                BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
                if (msg != null)
                {
                    // Write the message body and complete the message.
                    string text = msg.GetBody<string>();
                    Console.WriteLine("Received: " + text);
                    msg.Complete();
                }
                else
                {
                    break;
                }
            }
            Console.WriteLine();

            // Should we commit?
            Console.WriteLine("Commit receive? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
            Console.WriteLine();
        }

    Note that if there are a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.
    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // Receive one message from each subscription.
                BrokeredMessage msg1 = subscriptionClient1.Receive();
                BrokeredMessage msg2 = subscriptionClient2.Receive();

                // Complete the message receives.
                msg1.Complete();
                msg2.Complete();

                Console.WriteLine("Msg1: " + msg1.GetBody<string>());
                Console.WriteLine("Msg2: " + msg2.GetBody<string>());

                // Commit the transaction.
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    Unsupported Scenarios The restriction that only one top-level messaging entity can participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

        try
        {
            // Create a transaction scope.
            using (TransactionScope scope = new TransactionScope())
            {
                BrokeredMessage msg1 = new BrokeredMessage("Message1");
                BrokeredMessage msg2 = new BrokeredMessage("Message2");

                // Send a message to Queue1
                Console.WriteLine("Sending Message1");
                queue1Client.Send(msg1);

                // Send a message to Queue2
                Console.WriteLine("Sending Message2");
                queue2Client.Send(msg2);

                // Commit the transaction.
                Console.WriteLine("Committing transaction...");
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    The results of running the code are shown below. When attempting to send a message to the second queue, the following exception is thrown:

        No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios There are some techniques that developers can use to work around the one-top-level-entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would have the default filters, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction. In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity and a subscription is not, this scenario will work.

    Read the article

  • How could you parallelise a 2D boids simulation

    - by Sycren
    How could you program a 2D boids simulation in such a way that it could use processing power from different sources (clusters, GPU)? In the above example, the non-coloured particles move around until they cluster (yellow) and stop moving. The problem is that all the entities could potentially interact with each other, although an entity in the top left is unlikely to interact with one in the bottom right. If the domain were split into different segments, it might speed the whole thing up, but if an entity wanted to cross into another segment there may be problems. At the moment this simulation works with 5,000 entities at a good frame rate; I would like to try it with millions if possible. Would it be possible to use quad trees to further optimise this? Any other suggestions?
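    A sketch of the usual starting point (illustrative C#; Boid, boids, and CellSize are assumed names): bin the boids into a uniform grid each frame so each boid only examines the 3x3 block of cells around it, then update cells in parallel. The same decomposition maps onto clusters (one region per node, exchanging only border cells) or a GPU (one thread per boid, neighbours looked up via the grid).

        using System.Collections.Generic;
        using System.Threading.Tasks;

        var grid = new Dictionary<(int, int), List<Boid>>();
        (int, int) CellOf(Boid b) => ((int)(b.X / CellSize), (int)(b.Y / CellSize));

        // Rebuild the grid each frame (cheap compared to O(n^2) interaction tests).
        foreach (var b in boids)
        {
            var key = CellOf(b);
            if (!grid.TryGetValue(key, out var cell))
                grid[key] = cell = new List<Boid>();
            cell.Add(b);
        }

        // Each cell's boids only consider neighbours in adjacent cells.
        Parallel.ForEach(grid, pair =>
        {
            foreach (var boid in pair.Value)
            {
                // gather neighbours from pair.Key and its 8 surrounding cells,
                // then apply the separation/alignment/cohesion rules
            }
        });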

    Read the article

  • Creating Entity as an aggregation

    - by Jamie Dixon
    I recently asked about how to separate entities from their behaviour and the main answer linked to this article: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ The ultimate concept written about here is that of: OBJECT AS A PURE AGGREGATION. I'm wondering how I might go about creating game entities as pure aggregation using C#. I've not quite grasped the concept of how this might work yet. (Perhaps the entity is an array of objects implementing a certain interface or base type?) My current thinking still involves having a concrete class for each entity type that then implements the relevant interfaces (IMoveable, ICollectable, ISpeakable etc). How can I go about creating an entity purely as an aggregation without having any concrete type for that entity?
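    A sketch of one way to express this in C# (type names are illustrative): the entity itself is nothing but a bag of components keyed by type; behaviour lives in the components, or in systems that iterate over entities having a given component, so no concrete per-entity class is needed.

        using System;
        using System.Collections.Generic;

        public interface IComponent { }

        public sealed class Entity
        {
            private readonly Dictionary<Type, IComponent> components =
                new Dictionary<Type, IComponent>();

            public void Add<T>(T component) where T : IComponent
            {
                components[typeof(T)] = component;
            }

            public T Get<T>() where T : class, IComponent
            {
                IComponent c;
                return components.TryGetValue(typeof(T), out c) ? (T)c : null;
            }

            public bool Has<T>() where T : IComponent
            {
                return components.ContainsKey(typeof(T));
            }
        }

        // An "entity type" is now just a particular mix of components:
        // var coin = new Entity();
        // coin.Add(new Moveable());
        // coin.Add(new Collectable());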

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library A library that extends the functionality of Entity Framework. Features: Batch Update and Delete, Future Queries, Audit Log. Project Package and Source: NuGet Package: PM> Install-Package EntityFramework.Extended; NuGet: http://nuget.org/List/Packages/EntityFramework.Extended; Source: http://github.com/loresoft/EntityFramework.Extended

    Batch Update and Delete A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.

    Deleting:

        // delete all users where FirstName matches
        context.Users.Delete(u => u.FirstName == "firstname");

    Update:

        // update all tasks with status of 1 to status of 2
        context.Tasks.Update(
            t => t.StatusId == 1,
            t => new Task { StatusId = 2 });

        // example of using an IQueryable as the filter for the update
        var users = context.Users.Where(u => u.FirstName == "firstname");
        context.Users.Update(users, u => new User { FirstName = "newfirstname" });

    Future Queries Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is great. Using this feature is as simple as appending .Future() to the end of your queries. To use Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods: Future(), FutureFirstOrDefault(), FutureCount().

    Sample:

        // build up queries
        var q1 = db.Users
            .Where(t => t.EmailAddress == "[email protected]")
            .Future();

        var q2 = db.Tasks
            .Where(t => t.Summary == "Test")
            .Future();

        // this triggers the loading of all the future queries
        var users = q1.ToList();

    In the example above, two queries are built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries.

        // base query
        var q = db.Tasks.Where(t => t.Priority == 2);

        // get total count
        var q1 = q.FutureCount();

        // get page
        var q2 = q.Skip(pageIndex).Take(pageSize).Future();

        // triggers execute as a batch
        int total = q1.Value;
        var tasks = q2.ToList();

    In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch the queries together to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.

    Audit Log The Audit Log feature will capture changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed, and only the properties on those entities that were changed. The before and after values are recorded. AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.

    Fluent Configuration:

        // configure audit when your application is starting up...
        var auditConfiguration = AuditConfiguration.Default;

        auditConfiguration.IncludeRelationships = true;
        auditConfiguration.LoadRelationships = true;
        auditConfiguration.DefaultAuditable = true;

        // customize the audit for the Task entity
        auditConfiguration.IsAuditable<Task>()
            .NotAudited(t => t.TaskExtended)
            .FormatWith(t => t.Status, v => FormatStatus(v));

        // set the display member when Status is a foreign key
        auditConfiguration.IsAuditable<Status>()
            .DisplayMember(t => t.Name);

    Create an Audit Log:

        var db = new TrackerContext();
        var audit = db.BeginAudit();

        // make some updates ...

        db.SaveChanges();
        var log = audit.LastLog;

    Read the article

  • Objective-C Moving UIView along a curved path

    - by PruitIgoe
    I'm not sure if I am approaching this the correct way. In my app, when a user touches the screen I capture the point and create an arc from a fixed point to that touch point. I then want to move a UIView along that arc. Here's my code:

    ViewController.m

        // method to "shoot" an object - KIP_Projectile creates the arc,
        // KIP_Character creates the object I want to move along the arc
        ...
        // get arc for trajectory
        KIP_Projectile *vThisProjectile =
            [[KIP_Projectile alloc] initWithFrame:CGRectMake(51.0, fatElvisCenterPoint - 30.0, touchPoint.x, 60.0)];
        vThisProjectile.backgroundColor = [UIColor clearColor];
        [self.view addSubview:vThisProjectile];
        ...
        KIP_Character *thisChar =
            [[KIP_Character alloc] initWithFrame:CGRectMake(51, objCenterPoint - 5, imgThisChar.size.width, imgThisChar.size.height)];
        thisChar.backgroundColor = [UIColor clearColor];
        thisChar.charID = charID;
        thisChar.charType = 2;
        thisChar.strCharType = @"Projectile";
        thisChar.imgMyImage = imgThisChar;
        thisChar.myArc = vThisProjectile;
        [thisChar buildImage];
        [thisChar traceArc];

    In KIP_Projectile I build the arc using this code:

        - (CGMutablePathRef)createArcPathFromBottomOfRect:(CGRect)rect :(CGFloat)arcHeight
        {
            CGRect arcRect = CGRectMake(rect.origin.x, rect.origin.y + rect.size.height - arcHeight,
                                        rect.size.width, arcHeight);
            CGFloat arcRadius = (arcRect.size.height / 2) + (pow(arcRect.size.width, 2) / (8 * arcRect.size.height));
            CGPoint arcCenter = CGPointMake(arcRect.origin.x + arcRect.size.width / 2, arcRect.origin.y + arcRadius);
            CGFloat angle = acos(arcRect.size.width / (2 * arcRadius));
            CGFloat startAngle = radians(180) + angle;
            CGFloat endAngle = radians(360) - angle;

            CGMutablePathRef path = CGPathCreateMutable();
            CGPathAddArc(path, NULL, arcCenter.x, arcCenter.y, arcRadius, startAngle, endAngle, 0);
            return path;
        }

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef currentContext = UIGraphicsGetCurrentContext();
            _myArcPath = [self createArcPathFromBottomOfRect:self.bounds :30.0];

            CGContextSetLineWidth(currentContext, 1);
            CGFloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f};
            CGContextSetStrokeColor(currentContext, red);
            CGContextAddPath(currentContext, _myArcPath);
            CGContextStrokePath(currentContext);
        }

    Works fine. The arc is displayed with a red stroke on the screen. In KIP_Character, which has been passed its relevant arc, I am using this code but getting no results.

        - (void)traceArc
        {
            CGMutablePathRef myArc = _myArc.myArcPath;

            // Set up path movement
            CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
            pathAnimation.calculationMode = kCAAnimationPaced;
            pathAnimation.fillMode = kCAFillModeForwards;
            pathAnimation.removedOnCompletion = NO;
            pathAnimation.path = myArc;
            CGPathRelease(myArc);

            [self.layer addAnimation:pathAnimation forKey:@"savingAnimation"];
        }

    Any help here would be appreciated.
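    A sketch of one thing to check (an assumption about the cause, not from the original post): the arc was built in KIP_Projectile's own coordinate space, but a keyframe "position" animation needs the path expressed in the coordinate space of the animated layer's superlayer. Offsetting the path by the projectile view's origin, and not relying on drawRect: having run to populate myArcPath, might look like this:

        CGAffineTransform shift = CGAffineTransformMakeTranslation(_myArc.frame.origin.x,
                                                                   _myArc.frame.origin.y);
        CGPathRef pathInSuperlayerSpace =
            CGPathCreateCopyByTransformingPath(_myArc.myArcPath, &shift);

        CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
        pathAnimation.calculationMode = kCAAnimationPaced;
        pathAnimation.duration = 1.5; // illustrative
        pathAnimation.fillMode = kCAFillModeForwards;
        pathAnimation.removedOnCompletion = NO;
        pathAnimation.path = pathInSuperlayerSpace;
        CGPathRelease(pathInSuperlayerSpace); // release the copy, not the projectile's path

        [self.layer addAnimation:pathAnimation forKey:@"savingAnimation"];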

    Read the article

  • Does placing Google Analytics code in an external file affect statistics?

    - by Jacob Hume
    I'm working with an outside software vendor to add Google Analytics code to their web app so that we can track its usage. Their developer suggested that we place the code in an external ".js" file, and he could include that in the layout of his application. The Stack Overflow question "Google Analytics: External .js file" covers the technical aspect, so apparently tracking is possible via an external file. However, I'm not quite satisfied that this won't have negative implications. Does including the tracking code as an external file affect the statistics collected by Google?

    Read the article

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for HealthCare integration is an integrated, best-of-breed suite that helps HealthCare organizations rapidly design, assemble, deploy and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for healthcare integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains. Healthcare message standards support: Messaging standards - HL7, HIPAA, Custom, X12N; Exchange standards - MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, JMS. Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards that are fully customizable and extensible. MLLP-HA enables easy failover and disaster recovery, keeping the system running for long stretches without issues. Audit keeps track of all system changes, and alerts and notifications (SMS, email, etc.) help users act quickly and provide real-time tracking.

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes. Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model. This is fairly common, and alleviates the need to map between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model. Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class. Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file. However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title. How then can we add attributes to this generated code? Do we need to go into the T4 template and add a special case that says if we're generating a Dinner class and it has a Title property, add this attribute? Ick.

    MetadataType to the Rescue The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class. It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class, to which you're free to add whatever methods and properties you like. Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner
        {
        }

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner. Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes. Thanks to Julie Lerman for her help with this. Right after she showed me how to do this, I realized it was also already being done in the project I was working on.

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    WEB CONSOLES in OIM 11gR2 ** In 11gR1 there were 3 admin web consoles: the Self Service Console, the Administration Console, and the Advanced Administration Console. In OIM 11gR2, the Self Service and Administration Consoles have been combined and are now called the Identity Self Service Console (http://host:port/identity). This console has 3 areas: managing your own profile (My Profile), managing requests such as requesting App Instances and approving requests (Requests), and general administration tasks of creating/managing users, roles, organizations, attestation, etc. (Administration). ** In OIM 11gR2 a new console, sysadmin, has been added for administrators (http://host:port/sysadmin); it includes some of the Design Console functions apart from general administration features.

    Application Instances An application instance is the object that is provisioned to a user. Application Instances are checked out in the catalog, and users can request application instances via the catalog.

    · In OIM 11gR2, resources and entitlements are bundled in an Application Instance, which users can select and request from the catalog.
    · An Application Instance is a combination of an IT Resource and an RO. So, you cannot create another App Instance with the same RO & IT Resource if one already exists for some other App Instance; one of these (RO or IT Resource) must have a different name.
    · If you want users of a particular organization to be able to request Application Instances through the catalog, the App Instances must be attached to that particular organization.
    · An application instance can be associated with multiple organizations.
    · An application instance can also have entitlements associated with it. An entitlement can include Roles/Groups or a Responsibility.
    · Application Instances are published to the catalog by the scheduled task "Catalog Synchronization Job".
    · An Application Instance can have a child/parent application instance, where the child application instance inherits all attributes of the parent.

    Important point to remember with Application Instances: if you delete an Application Instance in OIM 11gR2 and create a new one with the same name, OIM will not allow it. It throws an error saying an Application Instance already exists with the same Resource Object and IT Resource. This is because some reference for the deleted Application Instance is still not removed in OIM. So to completely delete your Application Instance from OIM, you must: 1. Delete the App Instance from the sysadmin console. 2. Run the App Instance Post Delete Processing Job in Revoke/Delete mode. 3. Run the Catalog Synchronization Job. Once done, you should be able to create a new App Instance with the previous RO & IT Resource name.

    Catalog The catalog allows users to request Roles, Application Instances, and Entitlements in an Application. Catalog Items - Roles, Application Instances and Entitlements that can be requested via the catalog are called catalog items. Detailed Information (attributes of a Catalog Item): Category - each catalog item is associated with one and only one category; Catalog Administrators can provide the value for a catalog item. Tags - search keywords helpful in searching the Catalog. When users search the Catalog, the search is performed against the tags. To define a tag, go to Catalog -> search the resource -> select the resource -> update the tag field with a custom search keyword. Tags are of three types: a) Auto-generated Tags: the Catalog synchronization process auto-tags the Catalog Item using the Item Type, Item Name and Item Display Name. b) User-defined Tags: additional keywords entered by the Catalog Administrator. c) Arbitrary Tags: while defining a metadata field, if the user has marked that metadata as searchable, it will also be part of the tags.

    Sandbox The sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations, so that they don't affect other users before they are published and linked to the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes/modifications don't affect other users until you have finalized the changes and the customization is complete. Once UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and made available to other users. Creating and activating a sandbox is mandatory for customizing the UI; without an active sandbox, OIM does not allow you to customize any page. a) Before you perform any customization activity in OIM (like creating/modifying forms and custom attributes, creating application instances, or adding roles/attributes to the catalog) you must create a sandbox and activate it. b) One can create multiple sandboxes in OIM, but only one sandbox can be active at any given time. c) You can export/import a sandbox to move changes from one environment to another.

    Creating a Sandbox To create a sandbox, log in to Identity Manager Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.

    Publishing a Sandbox Before you publish a sandbox, it is recommended to back up MDS. Use /em to back up MDS by following the steps below.

    Creating an MDS Backup 1. Log in to Oracle Enterprise Manager as the administrator. 2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0). 3. From the Application Deployment menu at the top, select MDS configuration. 4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

    Creating a Password Policy through the Admin Console: In 11gR1 and previous versions, password policies could be created and applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can be created and assigned using the Admin Console as well.

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers, which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately make code reusable or not, regardless of the language or methodology.

But it did get me thinking… we always used to say that as part of our mantra as to why object-oriented programming was so great. With polymorphism, inheritance, encapsulation, etc., we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet it can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!

Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse – one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times I want to kick off a thread to handle a task, then when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

    public static class ThreadExtensions
    {
        // Joins the thread for up to timeToWait; aborts the thread if the join times out.
        public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
        {
            bool isJoined = false;

            if (thread != null)
            {
                isJoined = thread.Join(timeToWait);

                if (!isJoined)
                {
                    thread.Abort();
                }
            }
            return isJoined;
        }
    }
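The original post stops at the definition; purely as an illustration, here is a minimal usage sketch of my own (the worker body and the five-second timeout are assumptions, not from the article):

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            // A worker that deliberately outlives the timeout below.
            var worker = new Thread(() => Thread.Sleep(TimeSpan.FromSeconds(30)));
            worker.Start();

            // Wait up to five seconds for a clean finish, then abort.
            // (Note: Thread.Abort is a .NET Framework-era API; it throws on .NET Core/5+.)
            bool finished = worker.JoinOrAbort(TimeSpan.FromSeconds(5));
            Console.WriteLine(finished ? "Worker finished cleanly." : "Worker was aborted.");
        }
    }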
    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

·         Single Responsibility Principle (SRP) – the only reason this extension method need change is if the Thread class itself changes (one responsibility).
·         Stable Dependencies Principle (SDP) – this method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
·         Open-Closed Principle (OCP) – this class is inherently closed to change.
·         Small and Stable Problem Domain – this method only cares about System.Threading.Thread.
·         All-or-None Usage – a user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
·         Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

Okay, all seems good there; now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

    namespace Shared.Entities
    {
        public class Account
        {
            public int Id { get; set; }

            public string Name { get; set; }

            public Address HomeAddress { get; set; }

            public int Age { get; set; }

            public DateTime LastUsed { get; set; }

            // etc, etc, etc...
        }
    }

    ...

    namespace Shared.DataAccess
    {
        public class AccountDao
        {
            public Account FindAccount(int id)
            {
                // dao logic to query and return account
            }

            ...
        }
    }

Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

·         Single Responsibility Principle (SRP) – the reasons to change for these classes depend strongly on the definition of an account, which can change over time and may have multiple influences depending on the number of systems an account covers.
·         Stable Dependencies Principle (SDP) – this class depends on the data model beneath it, which in turn depends largely on the business definition of an account – an inherently unstable thing.
·         Open-Closed Principle (OCP) – this class is not really closed for modification; every time the account definition changes, you'd need to modify this class.
·         Small and Stable Problem Domain – the definition of an account is inherently unstable and may in fact be very large. What if you are designing a system that aggregates account information from several sources?
·         All-or-None Usage – what if your view of the account encompasses data from three different sources, but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    ·         Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.

It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.).

As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

Think about the times you've worked at a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

So what do I think are reusable components?

·         Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
·         Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.); a minimal sketch follows this list.
·         Simplification and Uniformity Frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).
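To make that middle bullet concrete, here is a minimal sketch of such a messaging abstraction layer; the interface and adapter names are my own illustrative assumptions, not code from the original post:

    using System;

    // Callers depend only on this interface, never on the messaging product itself.
    public interface IMessagePublisher
    {
        void Publish(string topic, string message);
    }

    // One possible adapter; swapping MSMQ for JMS (or anything else) means
    // writing a new adapter, not touching every caller.
    public class MsmqPublisher : IMessagePublisher
    {
        public void Publish(string topic, string message)
        {
            // Product-specific send logic would live here; stubbed for the sketch.
            Console.WriteLine("[MSMQ] {0}: {1}", topic, message);
        }
    }

Code written against IMessagePublisher stays unchanged when the provider changes, which is exactly the stability property that makes this kind of framework a good reuse candidate.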
    And what are less reusable?

·         Application and Business Layers – these tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. They may be reused across applications but are also very volatile.
·         Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

So what's the big lesson? Reuse is hard. In fact, it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that will most likely be obsolete in a year or two anyway.

    Read the article

  • Cooperator Framework

    - by csharp-source.net
    Cooperator Framework is a base class library for high-performance Object Relational Mapping (ORM) and a code generation tool that aids agile application development for the Microsoft .NET Framework 2.0/3.0. The main features are:
* Uses business entities.
* Fully typed model (data layer and entities).
* Maintains persistence across the layers by passing specific types (.NET 2.0/3.0 generics).
* Business objects can bind to controls in Windows Forms and Web Forms, taking advantage of the data binding of Visual Studio 2005.
* Supports any primary key defined on tables, with no need to modify it or to create a unique field.
* Uses stored procedures for data access.
* Supports concurrency.
* Generates code both for stored procedures and projects in C# or Visual Basic.
* Maintains the model in a repository, which can be modified at any stage of the development cycle, regenerating the model on demand.
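The project description above includes no code; purely to illustrate the fully-typed-model idea, a generated entity and data-access pair might be shaped roughly like this (all names are hypothetical and not Cooperator Framework's actual generated API):

    using System;

    // Hypothetical shapes only -- not Cooperator Framework's real generated code.
    public class Customer
    {
        public int CustomerId { get; set; }  // works with the table's own primary key
        public string Name { get; set; }
    }

    public class CustomerDataAccess
    {
        // A generated method like this would invoke a stored procedure
        // rather than building SQL strings in application code.
        public Customer GetByPrimaryKey(int customerId)
        {
            // Stored-procedure call and result mapping would be generated here.
            throw new NotImplementedException();
        }
    }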

    Read the article
