Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 57/348 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open/Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed.

    I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly, not at all). If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated. If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them.

    In view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?
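
    To make the swap concrete, here is a minimal C# sketch of the approach described above; the formatter names are hypothetical, not from the original post. The interface stays frozen, and replacing the mature implementation A with the new implementation B touches only the one place where the concrete type is instantiated:

        // "I": the frozen interface.
        interface IReportFormatter
        {
            string Format(string data);
        }

        // "A": the mature, tested implementation - closed for modification.
        class PlainFormatter : IReportFormatter
        {
            public string Format(string data) { return data; }
        }

        // "B": the new implementation, developed while A stays in production.
        class TrimmedFormatter : IReportFormatter
        {
            public string Format(string data) { return data.Trim(); }
        }

        static class FormatterFactory
        {
            // The only line that changes once B is mature enough to replace A.
            public static IReportFormatter Create() { return new PlainFormatter(); }
        }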

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly "-as-a-Service" models of computing - have a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phase of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store, or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later: saving that up-front cost allows you to invest it in other things.

    It's not just that you're using things that now cost money - it's that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay to perform them, you can set a profit margin that is easy to track.

    Here's a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it's best to maximize its use all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost and performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.
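
    As a small illustration of designing those metrics in from the outset, here is a hedged C# sketch - hypothetical names, not from the original post - that counts and times calls into a metered storage layer so that cost can be tied back to usage:

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Wraps billable storage calls so usage (and therefore cost) is measured.
        static class StorageMeter
        {
            static long _calls;      // number of billable operations
            static long _elapsedMs;  // total time spent waiting on storage

            public static T Meter<T>(Func<T> storageCall)
            {
                Interlocked.Increment(ref _calls);
                var sw = Stopwatch.StartNew();
                try { return storageCall(); }
                finally { Interlocked.Add(ref _elapsedMs, sw.ElapsedMilliseconds); }
            }

            public static void Report()
            {
                Console.WriteLine("billable storage calls: {0}, total ms: {1}", _calls, _elapsedMs);
            }
        }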

    Read the article

  • What is the simplest human readable configuration file format?

    - by Juha
    The current configuration file is as follows:

        mainwindow.title = 'test'
        mainwindow.position.x = 100
        mainwindow.position.y = 200
        mainwindow.button.label = 'apply'
        mainwindow.button.size.x = 100
        mainwindow.button.size.y = 30
        logger.datarate = 100
        logger.enable = True
        logger.filename = './test.log'

    This is read with Python into a nested dictionary:

        {
            'mainwindow': {
                'button': {
                    'label': {'value': 'apply'},
                    ...
                },
                ...
            },
            'logger': {
                'datarate': {'value': 100},
                'enable':   {'value': True},
                'filename': {'value': './test.log'}
            },
            ...
        }

    Is there a better way of doing this? The idea is to get XML-like behavior while avoiding XML as long as possible. The end user is assumed almost totally computer illiterate and basically uses Notepad and copy-paste, so the Python standard "header + variables" type is considered too difficult. The dummy user edits the config file; capable programmers handle the dictionaries. A nested dictionary is chosen for easy splitting (logger does not need, or even cannot have/edit, mainwindow parameters).

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to the following one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            public byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                ...
                if( instance is Derived1 )
                {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 )
                {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take this new class into account. Is there a design pattern that allows me to decouple this, and lets me easily introduce more classes to the Base hierarchy and/or additional compression strategies?
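
    One commonly cited way to break this coupling is double dispatch, i.e. the Visitor pattern. Below is a minimal, hedged C# sketch of that idea (the method bodies are placeholders): each subclass dispatches to a type-specific overload, so adding a new subclass forces every compressor, at compile time, to declare how it handles the new type instead of failing silently inside an is-chain.

        interface ICompressor
        {
            byte[] Compress(Derived1 instance);
            byte[] Compress(Derived2 instance);
        }

        abstract class Base
        {
            // Double dispatch: the subclass picks the right overload.
            public abstract byte[] Accept(ICompressor compressor);
        }

        class Derived1 : Base
        {
            public override byte[] Accept(ICompressor c) { return c.Compress(this); }
        }

        class Derived2 : Base
        {
            public override byte[] Accept(ICompressor c) { return c.Compress(this); }
        }

        class FileCompressor : ICompressor
        {
            public byte[] Compress(Derived1 instance) { /* Derived1-specific guts */ return new byte[0]; }
            public byte[] Compress(Derived2 instance) { /* Derived2-specific guts */ return new byte[0]; }
        }

    The trade-off is the classic Visitor one: adding a new compression strategy stays cheap, while adding a subclass now touches every compressor - which is exactly the compile-time pressure the question is looking for.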

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager.

    Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefit of SLM is that administrators can utilize representative application transactions, which constantly and automatically run behind the scenes, to verify that all of the key application and technology components of an application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying application tech stack in a much more efficient manner than monitoring the same underlying targets individually.

    In this article, we'll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database - in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if the transaction is either unavailable or is not performing to required levels.

    The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called "OpenScript". A completed recording is called a "Synthetic Transaction".

    The second step in the SLM process is uploading the Synthetic Transaction into EM 12c and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions run periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few.

    The third and final step in the SLM process is the creation of Service Level Agreements (SLAs). Service Level Agreements allow administrators to utilize the previously created Service Tests to specify expected service levels for application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, and highlights the dashboards and reports that administrators can use to monitor Service Test results.

    Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Google+ | Newsletter

    Read the article

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company's security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60 and 90 days. For a normal user, this causes a small interruption in your day as you go get your password reset by an admin. When it happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message "ORA-28002: the password will expire within 5 days" when you connect to a target, or worse, you may get "ORA-28001: the password has expired". If you wait too long, your monitoring will fail because the password is locked out.

    Wouldn't it be nice to get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator's Guide for more information on Metric Extensions.

    To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single-instance databases you may need to create one of each. In this example we will create a Cluster Database metric. Enter a Name and a Display Name for the ME, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric every 1 day. Notice that for a metric collected every day, we can determine the exact time we want to collect it.

    On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert for any user whose expiry can cause an outage, such as an application account or a service account such as RMAN.

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
        from dba_users
        where username = 'DBSNMP'
        and expiry_date is not null;

    The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields in the same order that the data is returned. The table below will help you complete the column additions.

        Name           Display Name            Column Type   Value Type   Metric Category   Unit
        Username       User Name               Key           String       Security
        AccountStatus  Account Status          Data          String       Security
        DaysToExpire   Days Until Expiration   Data          Number       Security          Days

    When creating the DaysToExpire column, you can add default thresholds here for Warning and Critical (say < 10 and < 5). When all columns have been added, click Next.

    On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

    The next step is to test your Metric Extension. Click Add to select a target for testing, then click Select. Now click the Run Test button to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed. Review the metric extension in the final screen and click Finish.

    The metric will be created in Editable status. Select the metric, click Actions, and select Deployable Draft; you can do this once more to move it to Published. Finally, we want to apply this metric to a target. When managing many targets, it's best to add your metric to a template; for details on adding a Metric Extension to a template see the Administrator's Guide. For this example, we will deploy directly to a target. Select Actions / Deploy to Targets, click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

    After some time, you will find the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and its current severity. In the example below, the current severity is Clear (green check) as the password is not scheduled to expire within 10 days.

    To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we'll change our Warning threshold to 140 and leave Critical at 5; in our example we also changed the collection schedule to every 5 minutes. At the next collection, you'll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

    Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is that you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb.

        $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database \
              -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
        Enter value for old_password :
        Enter value for new_password :
        Enter value for retype_new_password :
        Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
        Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
        Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

    The job created will typically run quickly enough that a blackout is not needed; however, if you submit a script that changes the password on many targets, your job may run longer, so adding a blackout to the script is recommended.

        $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
        Name         : CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84
        Type         : ChangePassword
        Job ID       : FA66C1C4D663297FE0437656F20ACC84
        Execution ID : FA66C1C4D665297FE0437656F20ACC84
        Scheduled    : 2014-05-28 09:39:12
        Completed    : 2014-05-28 09:39:18
        TZ Offset    : GMT-07:00
        Status       : Succeeded
        Status ID    : 5
        Owner        : SYSMAN
        Target Type  : oracle_database
        Target Name  : emrep

    After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two.

    The simplest way I can think of is to have every activity have a role, and to assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A scheme I recently designed was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case this is more complex, but I think it will scale better. Still, after I implemented it, I felt it was overkill for the system I had.

    Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you need only one. And an even more complex variant would be to allow logical statements of role ownership to gate an activity (i.e. must have A and (B exclusive-or C) and must not have D).

    I could continue to list more, but I think this already gives the picture, and many of these have trade-offs. But in software design there are oftentimes solutions which, while perhaps not perfect in every possible case, are clearly top of the pack to an extent that it isn't even considered opinion-based (e.g. how to store passwords: plain text is worse, hashing is better, hashing with salt even better, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if what the best design is remains subjective). So, is there a best practice for authorization design that is not purely opinion-based/subjective?
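
    For concreteness, here is a minimal, hedged C# sketch of the middle option above (users have roles, roles grant privileges, activities require a set of privileges); all names are hypothetical:

        using System.Collections.Generic;
        using System.Linq;

        class Role
        {
            public HashSet<string> Privileges = new HashSet<string>();
        }

        class User
        {
            public List<Role> Roles = new List<Role>();

            // The union of privileges granted by all of the user's roles.
            public IEnumerable<string> Privileges
            {
                get { return Roles.SelectMany(r => r.Privileges); }
            }
        }

        class Activity
        {
            public HashSet<string> RequiredPrivileges = new HashSet<string>();

            // Authorized only if every required privilege is held.
            public bool IsAuthorized(User user)
            {
                return RequiredPrivileges.IsSubsetOf(user.Privileges);
            }
        }

    The logical-statement variant described in the question would replace RequiredPrivileges with a predicate over the user's privilege set.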

    Read the article

  • Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits

    - by Anand Akela
    Earlier this month at Oracle OpenWorld 2012, we celebrated the first anniversary of Oracle Enterprise Manager 12c. Early adopters of Oracle Enterprise Manager 12c have benefited from its federated self-service access to complete application stacks, automated provisioning, elastic scalability, metering, and charge-back capabilities. Crimson Consulting Group recently interviewed multiple early adopters of Oracle Enterprise Manager 12c and captured their findings in the white paper "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains".

    On October 25th at 10 AM Pacific time, Kirk Bangstad from the Crimson Consulting Group will join us in a live webcast and share what he learned from the early adopters of Oracle Enterprise Manager 12c. Don't miss this chance to hear how private clouds could impact your business, and to ask questions of our experts.

    Webcast: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits
    Date: Thursday, October 25, 2012
    Time: 10:00 AM PDT | 1:00 PM EDT
    Register Today

    All attendees will receive the white paper: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com or one of the top 100 on stackoverflow.com, which would you choose?

    At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills - he may be purely a "lone coder", with little or no ability to help or work with others, and may lack the mentoring ability to transfer his technical skills to others.

    On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and of the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points - a single up-vote (perhaps given just out of courtesy, and worth +10) will counteract the effects of no fewer than 5 down-votes (-2 each), and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak.

    So, which provides a more useful indication of the degree to which a particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • How should I study programming languages?

    - by gcc
    I am a student of computer engineering. I have never done any programming before, and as you can understand, I don't know how to study it or how to make my own programs. My English is weak [edited for clarity - ed], and so if you don't like the choices I list, please feel free to provide others. How should I study? How should I learn programming languages?

    1) Study completely from a book.
    2) Don't study from a book; just try writing code.
    3) A mix of the two: study from a book, then try writing code.
    4) Study half the book, then write the code by hand on paper.
    5) Listen to the teacher, then try to solve general problems (those not from any specific chapter).

    Read the article

  • What are DRY, KISS, SOLID, etc. classified as?

    - by Morgan Herlocker
    Is something like DRY a design pattern, a methodology, or something in between? They do not have specific implementations that could necessarily be demonstrated (even if you can easily demonstrate a case NOT using something like KISS... see The Daily WTF for a plethora of examples), nor do they fully explain a development process like a methodology generally would. Where does that leave these types of "rules of thumb"?

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx

    In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx

    As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.

    Read the article

  • C# WebForms and Ninject

    - by ipohfly
    I'm reworking the design of an existing application which is built using WebForms. Currently the plan is to rework it into an MVP-pattern application while using Ninject as the IoC container. The reason for Ninject to be there is that the boss wanted a certain flexibility within the system so that we can build different flavors of business logic in the model and let the programmer choose which to use based on the client request, either via XML configuration or database settings. I know that Ninject has no need for XML configuration; however, I'm confused about how it can help to dynamically inject the dependency into the system. Imagine I have an interface IMember and I need to bind this interface to a class decided by an XML or database configuration at the launch of the application - how can I achieve that?
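
    One hedged sketch of an answer (the type names and config key are hypothetical; the Bind/To calls are standard Ninject API, but verify against the version you use): read the implementation's type name from your configuration source at startup, resolve it with reflection, and register the binding on the kernel.

        using System;
        using System.Configuration;
        using Ninject;

        public interface IMember { }

        public static class Bootstrapper
        {
            public static IKernel CreateKernel()
            {
                var kernel = new StandardKernel();

                // e.g. <add key="memberImpl" value="MyApp.PremiumMember, MyApp" />
                // (the key name and type are hypothetical)
                string typeName = ConfigurationManager.AppSettings["memberImpl"];
                Type implementation = Type.GetType(typeName, throwOnError: true);

                // Non-generic binding, since the type is only known at runtime.
                kernel.Bind(typeof(IMember)).To(implementation);
                return kernel;
            }
        }

    At request time, kernel.Get<IMember>() (or constructor injection in the presenters) then returns whichever flavor the configuration named; a database-driven variant would simply fetch typeName from a settings table instead of AppSettings.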

    Read the article

  • How to deal with tautology in comments?

    - by Tamás Szelei
    Sometimes I find myself in situations where the part of code that I am writing is (or seems to be) so self-evident that its name would basically be repeated as a comment:

        class Example
        {
            /// <summary>
            /// The location of the update.
            /// </summary>
            public Uri UpdateLocation { get; set; }
        }

    (C# example, but please treat the question as language-agnostic.) A comment like that is useless; what am I doing wrong? Is it the choice of the name that is wrong? How could I comment parts like this better? Should I just skip the comment for things like this?
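
    One hedged illustration of the usual advice - document what the name cannot say (units, nullability, side effects), or say nothing; the wording below is invented for the example:

        class Example
        {
            /// <summary>
            /// The URI polled for update packages. May be null, in which
            /// case automatic updates are disabled.
            /// </summary>
            public Uri UpdateLocation { get; set; }
        }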

    Read the article

  • OOP vs Frameworks (DRY, Organisation, Readability)

    - by benhowdle89
    In terms of organisation, code readability and DRY programming, which of OOP and frameworks shows more of these three attributes? I'm aware that inline, procedural coding is viewed by many as a thing of the past, so which is the best route to take of these two? Just to clarify what I mean by OOP and frameworks, from Wikipedia:

    Object-oriented programming (OOP) is a programming paradigm.

    In computer programming, a software framework is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code, thus providing specific functionality.

    Read the article

  • rel="nofollow" SEO impact

    - by Torez
    I saw a technique used where there was a block with three parts:

    1. Image (wrapped in an anchor tag)
    2. Heading (anchor tag with heading text)
    3. Paragraph (regular p tag with synopsis content)

    e.g.

        <li class="block">
            <a rel="nofollow" class="thumb" href="#"><img src="images/placeholder_service_thumbnail.jpg" alt="" /></a>
            <a class="h3" href="#">Good SEO Heading</a>
            <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu...</p>
        </li>

    There was a rel="nofollow" on the anchor tag wrapping the image. So the idea is that the user still has the ability to click the image and go to the details page, but the image link does not rank; when users click on the heading text, that link is the only one that ranks for that specific page. Q: Is this the correct approach? Does this even do anything? What is the best practice?

    Read the article

  • Financial institutions build predictive models using Oracle R Enterprise to speed model deployment

    - by Mark Hornick
    See the Oracle press release, Financial Institutions Leverage Metadata Driven Modeling Capability Built on the Oracle R Enterprise Platform to Accelerate Model Deployment and Streamline Governance, for a description where a "unified environment for analytics data management and model lifecycle management brings the power and flexibility of the open source R statistical platform, delivered via the in-database Oracle R Enterprise engine to support open standards compliance."

    Through its integration with Oracle R Enterprise, Oracle Financial Services Analytical Applications provides "productivity, management, and governance benefits to financial institutions, including the ability to:

      • Centrally manage and control models in a single, enterprise model repository, allowing for consistent management and application of security and IT governance policies across enterprise assets
      • Reuse models and rapidly integrate with applications by exposing models as services
      • Accelerate development with seeded models and common modeling and statistical techniques available out-of-the-box
      • Cut risk and speed model deployment by testing and tuning models with production data while working within a safe sandbox
      • Support compliance with regulatory requirements by carrying out comprehensive stress testing, which captures the effects of adverse risk events that are not estimated by standard statistical and business models. This approach supplements the modeling process and supports compliance with the Pillar I and the Internal Capital Adequacy Assessment Process stress testing requirements of the Basel II Accord
      • Improve performance by deploying and running models co-resident with data. Oracle R Enterprise engines run in-database, virtually eliminating the need to move data to and from client machines, thereby reducing latency and improving security"

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to redesign the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this:

    Let us assume there is an abstract class called person which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, student and teacher, that implement this abstract class; they add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question).

    Now I need to create views/reports/data grids which contain details from multiple classes - for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application which use data from 3-4 tables, and each of them has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the student class, the teacher class, or any other class currently present.

    a) Should I create a new class, say a dataView class, design it as an MVC pattern, and keep the query there? What about the other views - how do they fit in this architecture?
    b) Should I not keep the query in code at all, and make it a DB view?
    c) Am I completely wrong in the approach? If so, what is the right approach?

    My considerations are as follows:
    a) it should be easy to add new views later on if a requirement comes, without having to copy-paste-modify code
    b) I would like to make it as loosely coupled as possible, so that if minor DB structure changes happen, it does not break

    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords).

    Edit 1: Modified the title to make it more relevant.
    Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow that gave me the detailed answer to clear up the confusion.
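
    For what option (a) might look like, here is a minimal, hedged sketch of a per-view query class - all names hypothetical, and written in C# although the original question is PHP; the shape carries over:

        using System.Collections.Generic;

        // A row of the report, not a domain entity: it exists only for this view.
        class StudentProjectRow
        {
            public string StudentName;
            public string ProjectSubject;
            public string TeacherName;
        }

        // One query object per view keeps each report's SQL in a single place,
        // so adding a view means adding a class, not editing student/teacher.
        class StudentsByMentorQuery
        {
            private readonly IDbExecutor _db;   // thin wrapper over PDO/ADO.NET etc.

            public StudentsByMentorQuery(IDbExecutor db) { _db = db; }

            public IList<StudentProjectRow> Run(string teacherName, string subject)
            {
                const string sql =
                    @"SELECT s.name, p.subject, t.name
                      FROM students s
                      JOIN projects p ON p.student_id = s.id
                      JOIN teachers t ON t.id = p.mentor_id
                      WHERE t.name = @teacher AND p.subject = @subject";
                return _db.Query<StudentProjectRow>(sql, teacherName, subject);
            }
        }

        // Hypothetical abstraction over the database driver, for the sketch.
        interface IDbExecutor
        {
            IList<T> Query<T>(string sql, params object[] args);
        }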

    Read the article

  • Is having a class have a handleAction(type) method bad practice?

    - by zhenka
    My web application became a little too complicated to do everything in a controller, so I had to build large wrapper classes for the ORM models. The possible actions a user can trigger are all similar, and after a certain point I realized that the best way to go would be to have the constructor receive the action type as a parameter and take care of the small differences internally, as opposed to either passing many arguments or doing a lot of things in the controller. Is this a good practice? I can't really give details for privacy reasons.
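
    For contrast, here is a hedged sketch of the usual alternative to branching on an action type inside one class: one small handler per action behind a common interface (all names hypothetical, C# standing in for the PHP of the question):

        using System;
        using System.Collections.Generic;

        interface IActionHandler
        {
            void Handle(IDictionary<string, string> parameters);
        }

        class ApproveHandler : IActionHandler
        {
            public void Handle(IDictionary<string, string> p) { /* approve-specific work */ }
        }

        class RejectHandler : IActionHandler
        {
            public void Handle(IDictionary<string, string> p) { /* reject-specific work */ }
        }

        static class ActionDispatcher
        {
            // The only table to touch when a new action type appears.
            static readonly Dictionary<string, IActionHandler> Handlers =
                new Dictionary<string, IActionHandler>(StringComparer.OrdinalIgnoreCase)
                {
                    { "approve", new ApproveHandler() },
                    { "reject",  new RejectHandler() },
                };

            public static void Dispatch(string actionType, IDictionary<string, string> p)
            {
                Handlers[actionType].Handle(p);
            }
        }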

    Read the article

  • Is it better to define all routes in the Global.asax than to define them separately in the Areas?

    - by Matthew Patrick Cashatt
    I am working on an MVC 4 project that will serve as the API layer of a larger application. The developers that came before me set up separate Areas to separate different API requests (i.e. Search, Customers, Products, and so forth). I am noticing that each Area has a separate area-registration class that defines routes for that Area. However, the routes defined are not Area-specific (i.e. {controller}/{action}/{id} might be defined redundantly in a couple of Areas). My instinct would be to move all of these route definitions to a common place like Global.asax to avoid redundancy and collisions, but I am not sure if I am correct about that.
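
    For reference, the conventional shape of an Area registration in ASP.NET MVC avoids the collision by prefixing the route with the Area name; a hedged sketch (the Search Area name comes from the question, the rest is illustrative):

        using System.Web.Mvc;

        public class SearchAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Search"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                // The "Search/" prefix keeps this route from colliding with a
                // generic {controller}/{action}/{id} route in Global.asax.
                context.MapRoute(
                    "Search_default",
                    "Search/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional }
                );
            }
        }

    With the prefix in place, per-Area registration is not redundant; the redundancy described above comes from registering unprefixed routes inside the Areas.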

    Read the article

  • Generalise variable usage inside code

    - by Shirish11
    I would like to know if it is a good practice to generalize variables (use a single variable to store all the values). Consider a simple example:

        Strings querycre, queryins, queryup, querydel;
        querycre = 'Create table XYZ ...';
        execute querycre;
        queryins = 'Insert into XYZ ...';
        execute queryins;
        queryup = 'Update XYZ set ...';
        execute queryup;
        querydel = 'Delete from XYZ ...';
        execute querydel;

    and

        Strings query;
        query = 'Create table XYZ ...';
        execute query;
        query = 'Insert into XYZ ...';
        execute query;
        query = 'Update XYZ set ...';
        execute query;
        query = 'Delete from XYZ ...';
        execute query;

    In the first case I use four strings, each storing the data to perform the action mentioned in its suffix. In the second case, just one variable stores all kinds of data. Having different variables makes it easier for someone else to read and understand, but having too many of them makes them difficult to manage. Also, does having too many variables hamper performance?

    Read the article

  • Is using ELSE bad programming?

    - by dave.b
    I've often come across bugs that have been caused by using the ELSE construct. A prime example is something along the lines of:

        if (passwordCheck() == false) {
            displayMessage();
        } else {
            letThemIn();
        }

    To me this screams security problem. I know that passwordCheck() is likely to be a boolean, but I wouldn't rest my application's security on it. What would happen if it's a string, an int, etc.? I usually try to avoid using ELSE, and instead opt for two completely separate IF statements to test for what I expect. Anything else then either gets ignored or is specifically handled. Surely this is a better way to prevent bugs / security issues entering your app. How do you guys do it?
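
    A hedged C# sketch of the two-separate-IFs style the poster describes (stub bodies invented for the example; whether this actually prevents more bugs than a well-typed else is exactly what the question asks):

        enum PasswordResult { Valid, Invalid, Locked }

        static class Auth
        {
            static PasswordResult passwordCheck() { return PasswordResult.Locked; } // stub
            static void letThemIn() { }                                             // stub
            static void displayMessage() { }                                        // stub
            static void logUnexpectedState(PasswordResult r) { }                    // stub

            static void Authenticate()
            {
                PasswordResult result = passwordCheck();

                // Each expected outcome is tested explicitly.
                if (result == PasswordResult.Valid) { letThemIn(); }
                if (result == PasswordResult.Invalid) { displayMessage(); }

                // Anything unexpected is handled deliberately - never let in.
                if (result != PasswordResult.Valid && result != PasswordResult.Invalid)
                {
                    logUnexpectedState(result);
                }
            }
        }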

    Read the article

  • What use is a Business Logic Layer (BLL)?

    - by Andrew S. Arnold
    In reading up on good practice for database applications I've frequently come across advocates of so-called "business logic layers" and I'm trying to decide if it's best for my project to use one (it's a small personal project). My issue lies in the fact that I can't think of anything for the BLL to do that the DAL can't already handle (executing queries and mapping results to objects), so my BLL just calls the DAL without doing anything itself. Maybe I'm wrong about exactly what the DAL should be doing too. But regardless, what sorts of functionality should be expected of a BLL in a database management application?
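
    For a concrete (if hedged) example of work that belongs above the DAL, consider a transfer operation: the DAL executes queries, while the BLL enforces a business rule that no single query expresses. All names below are hypothetical:

        using System;

        class AccountDal
        {
            public decimal GetBalance(int accountId) { /* SELECT ... */ return 0m; }
            public void AddEntry(int accountId, decimal delta) { /* INSERT ... */ }
        }

        class TransferService            // the "BLL": rules, not queries
        {
            private readonly AccountDal _dal;
            public TransferService(AccountDal dal) { _dal = dal; }

            public void Transfer(int from, int to, decimal amount)
            {
                // Business rule: overdrafts are forbidden. The DAL cannot
                // know this; it just reads and writes rows.
                if (_dal.GetBalance(from) < amount)
                    throw new InvalidOperationException("insufficient funds");

                _dal.AddEntry(from, -amount);
                _dal.AddEntry(to, amount);
            }
        }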

    Read the article

  • Absolute Top Programming Tips [closed]

    - by Eric
    I'm very interested in the stuff that REALLY makes a critical difference to a career in programming, other than intrinsic stuff like how smart you are, where you were born, etc. Some ideas:

    1) Best approach to managing small, medium, and large teams.
    2) Most important books to read.
    3) Most important skills to know.
    4) Correct balance of learning theory vs. just writing code.
    5) A good approach to estimating the time and cost of a project.
    6) Etc...

    Please limit your answers. If you see somebody has already written your idea, please just vote for their response. I'd like to see what the community thinks are the true indicators of a successful career in our field.

    Read the article

  • Is there a good example of the difference between practice and theory?

    - by a_person
    There have been a lot of posters advising that the best way to retain knowledge is to apply it practically. After ignoring said advice for several years in a futile attempt to accumulate enough theoretical knowledge to be prepared for every possible scenario - a process which led me to assemble a library easily worth ~6K - I finally get it. I would like to share my story in the hope that others will avoid taking the same route I took. I've selected a graphical format (photos with captions, to be exact) as my medium. Help me with your ideas: maybe a fragment of code, or other imagery that would convey the message of the inherent difference between practice and theory.

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >