Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.


  • Google BigQuery - Best Practices for Loading your Data and open Office Hours

    Michael Manoochehri and Ryan Boyd from the DevRel team for cloud data services will be streaming live! They'll discuss how to load your data into BigQuery and the various options available -- from commercial ETL tools, to App Engine's Pipeline API and MapReduce frameworks, to simple UNIX command-line tools. They'll then open it up for general office hours on ingestion and other topics. Please use the moderator link to ask your questions. From: GoogleDevelopers
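
    As a hedged sketch of one simple ingestion path they might touch on (assuming the google-cloud-bigquery Python client; the project, dataset, bucket and file names below are hypothetical), a CSV load job from Cloud Storage looks roughly like this:

        from google.cloud import bigquery

        # Hypothetical identifiers -- replace with your own project, dataset and bucket.
        table_id = "my-project.my_dataset.events"
        source_uri = "gs://my-bucket/events.csv"

        client = bigquery.Client()

        # Configure a CSV load job; autodetect infers the schema from the file.
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
        )

        # Start the load job and block until it finishes.
        load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
        load_job.result()

        table = client.get_table(table_id)
        print("Loaded {} rows into {}".format(table.num_rows, table_id))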

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Best Practices for Middleware Management

    - by JuergenKress
    This self-paced course teaches you best practices when using Oracle Enterprise Manager Cloud Control 12c for managing your WebLogic and SOA applications and infrastructure. It consists of interactive lectures, videos, review sessions, and optional demonstrations. This course covers Enterprise Manager Cloud Control 12c licensed with the WebLogic Server Management Pack Enterprise Edition and the SOA Management Pack Enterprise Edition. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Mix | Forum | Wiki Technorati Tags: EM12c,Enterprise Manager,EM12c training,education,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Good practices when writing a parser for a standard file format (such as ePub)

    - by J-F L-R
    I am considering writing an Android reader application that can read ePubs and display them. I checked the ePub standard documents, but they contain a lot of information. So I am wondering what the process is for implementing a standard for a file format. What are the steps to get a working implementation without skipping over parts of the standard? Are there any best practices? Also, is it even possible to program this alone in a reasonable time? From what I have already found out, ePub is basically a zip archive, so I could probably use zlib to decompress it. The content is in XHTML and CSS, so I believe it should be possible to display it in a WebView. The parts that are missing are the code that reads the metadata and handles the non-standard XHTML extensions.
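
    To illustrate the "zip archive plus XML" structure, here is a minimal sketch in Python (the asker targets Android/Java, but the file layout is the same): it opens an ePub, follows META-INF/container.xml to the OPF package document, and pulls the Dublin Core title from the metadata. The file name is hypothetical.

        import zipfile
        import xml.etree.ElementTree as ET

        NS = {
            "cnt": "urn:oasis:names:tc:opendocument:xmlns:container",
            "opf": "http://www.idpf.org/2007/opf",
            "dc": "http://purl.org/dc/elements/1.1/",
        }

        def read_epub_title(path):
            with zipfile.ZipFile(path) as epub:
                # META-INF/container.xml names the OPF package document.
                container = ET.fromstring(epub.read("META-INF/container.xml"))
                opf_path = container.find(".//cnt:rootfile", NS).attrib["full-path"]

                # The OPF metadata block holds the Dublin Core fields.
                package = ET.fromstring(epub.read(opf_path))
                return package.findtext(".//dc:title", namespaces=NS)

        print(read_epub_title("book.epub"))  # hypothetical file name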

    Read the article

  • S&OP best practices that can help your organization be more responsive and effective

    - by user717691
    If you want to increase revenue by quickly responding to market changes, or want to ensure that your operating plans drive towards corporate financial goals, you need real-time sales and operations planning. Watch the replay of our recent Webcast to hear Christopher Neff from NCR Corporation discuss how NCR Corporation is leveraging Oracle's Real-Time Sales and Operations Planning solutions. Learn best practices that can help your organization be more responsive and effective. Discover how Oracle's comprehensive suite of best-in-class capabilities can:
    • Synchronize plans and actions across the extended enterprise
    • Maximize profits with the ability to sense, influence, and fulfill demand with industry-leading demand management and real-time sales & operations planning
    • Drive tactical decisions into operational planning and execution, while monitoring performance
    • Profitably balance supply, demand, and budgets
    • Move planning processes from periodic and reactive to real-time, iterative and proactive
    Register now for the on-demand Webcast! http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=8664804&src=6811174&Act=99 NCR Corporation is a leader in self-service solutions such as POS solutions and payment and imaging systems.

    Read the article

  • Best practices for versioning project after dependency upgrade

    - by shabunc
    Say my project has a dependency N at version 1.0.0. Then something changes, and I should depend on a newer version - let it be 1.0.1. OK, I increment the dependency version and nothing else changes in my code. It looks like I should increment my own project's version, but how exactly should I increment? Should I increment only the third number (the so-called revision), or are best practices here more complicated? For example, if a dependency's minor version changes, should we change the project's minor version as well?
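
    For illustration only - not an established standard - here is a minimal Python sketch of the convention the question hints at: mirror the dependency's own bump level in the project's version. The function name and the policy itself are assumptions.

        def bump_for_dependency_change(project_version, old_dep, new_dep):
            """Mirror the dependency's bump level in the project's version.

            This encodes one possible convention, not a standard: a patch-level
            dependency change bumps the project's patch number, a minor-level
            change bumps the minor number, and so on.
            """
            proj = list(map(int, project_version.split(".")))
            old = list(map(int, old_dep.split(".")))
            new = list(map(int, new_dep.split(".")))

            # Find the highest-order component that changed in the dependency.
            for i, (a, b) in enumerate(zip(old, new)):
                if a != b:
                    proj[i] += 1
                    proj[i + 1:] = [0] * (len(proj) - i - 1)  # reset lower parts
                    break
            return ".".join(map(str, proj))

        print(bump_for_dependency_change("2.3.1", "1.0.0", "1.0.1"))  # -> 2.3.2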

    Read the article

  • Why would companies allow these practices?

    - by MuduM
    Here's a situation that usually happens in some companies:
    1. Announce interesting product X.
    2. Promise a release date.
    3. Release on the promised date, ready or not.
    4. Users discover and report defects.
    5. Send patch after patch after patch after patch after patch.
    My question is: what could be the factors that lead companies to tolerate these undesirable practices? And, in the name of quality, what can practically and realistically be improved in those practices? I can think of time constraints, user feedback, sponsors pressuring the company, and lack of money.

    Read the article

  • Are there any good references comparing Software Development CM best practices to IT CM best practices?

    - by dkackman
    I have spent my career on the software development side of things and in the latter part have become more and more involved in the realm of Software Configuration Management. Now I am moving into an IT group and need to ramp up on CM practices from that standpoint. Are there any good references (books, websites, blogs, whatever) out there comparing Software CM practices to IT CM practices? Basically I'm in learning mode and am trying to compare things I already know from the software development side to things on the IT side.

    Read the article

  • What are the best practices for writing maintainable CSS?

    - by Art
    I am just starting to explore this area and wonder what the best practices are when it comes to producing clean, well-structured and maintainable CSS. There seem to be a few different approaches to structuring CSS rules. One of the most commonly encountered ones I saw was throwing everything together in one rule, i.e. margins, borders, typefaces, backgrounds, something like this:

        .my-class {
            border-top: 1px solid #c9d7f1;
            font-size: 1px;
            font-weight: normal;
            height: 0;
            position: absolute;
            top: 24px;
            width: 100%;
        }

    Another approach I noticed employs grouping of properties: say, text-related properties like font size, typeface, emphasis, etc. go into one rule, backgrounds into another, and borders/margins into yet another:

        .my-class {
            border-top: 1px solid #c9d7f1;
        }
        .my-class {
            font-size: 1px;
            font-weight: normal;
        }
        .my-class {
            height: 0;
            top: 24px;
            width: 100%;
            position: absolute;
        }

    I guess I am looking for a silver bullet here, which I know I am not going to get, but nevertheless - what are the best practices in this space?

    Read the article

  • SQL SERVER – Find Largest Supported DML Operation – Question to You

    - by pinaldave
    SQL Server is very big and it is not possible to know everything in SQL Server, but we all keep learning. Recently I was going over the best practices for transaction logs and I came across the following statement: "The log size must be at least twice the size of the largest supported DML operation (using uncompressed data volumes)." First of all, I totally agree with this statement. However, here is my question - how do we measure the size of the largest supported DML operation? I welcome all opinions and suggestions. I will combine the list and share it with all of you, with due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
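
    One possible way to approach the measurement - a hedged sketch, not the author's answer - is to run the candidate DML inside an open transaction and read the log bytes it generated from the sys.dm_tran_database_transactions DMV (this requires VIEW SERVER STATE permission). Shown here from Python via pyodbc; the connection string and table name are hypothetical.

        import pyodbc

        # Hypothetical connection string and table name.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=mydb;Trusted_Connection=yes;",
            autocommit=False,
        )
        cur = conn.cursor()

        # Run the candidate DML inside an open transaction...
        cur.execute("DELETE FROM dbo.BigTable WHERE CreatedOn < '2010-01-01'")

        # ...then ask how much transaction log this transaction has used so far.
        cur.execute("""
            SELECT t.database_transaction_log_bytes_used
            FROM sys.dm_tran_database_transactions AS t
            JOIN sys.dm_tran_session_transactions AS s
              ON s.transaction_id = t.transaction_id
            WHERE s.session_id = @@SPID
        """)
        print("log bytes used:", cur.fetchone()[0])

        conn.rollback()  # measurement only; undo the DML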

    Read the article

  • How can I become more agile?

    - by dough
    The definition of an agile approach I've adopted is: working to reduce feedback loops, everywhere. I'd describe my Personal Development Process (PDP) as "not very agile" or "not agile enough"! I've adopted TDD, automated building, and time-boxing (using the Pomodoro Technique) as part of my PDP. I find these practices really help me get feedback, review my direction, and catch yak shaving earlier! However, what still escapes me is the ability to reduce feedback time in the ultimate feedback loop; regularly getting working software in front of the end user. Aside from team-oriented practices, what can I do to personally become more agile?

    Read the article

  • Being prepared for a code review as a developer?

    - by Karthik Sreenivasan
    I am looking for some ideas here. I read the articles "How should code reviews be carried out?" and "Code reviews: what are the advantages?", which were very informative, but I still need more clarity on the question below. My question is: as the target developer, can you suggest some best practices a developer can adopt before his code gets reviewed? Currently I practice the following methods: a PPT for the logical flow, and detailed comments. Issue: even though I have implemented the above practices, they do not help during the review. The problem I face is that when certain logic is discussed, I keep searching for the implementation and the flow, too much time is wasted in the process, and I get on people's nerves. I think a lot of developers are going through the same thing I am.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site - either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, know how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it offers the establishment of a common background for all people involved, an understanding of how the algorithms work, an understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more.

    Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, their knowledge and expertise allow us to focus immediately on the most interesting attributes and to identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data.

    I favor the customer's decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same - again I show the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance - it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.

    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
    • The customer learns the basic patterns of frauds and fraud detection
    • The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    • The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    • The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    • All of the attendees of the training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    • The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    • It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise

    Typically, the project also results in several important "side effects":
    • Improved data quality
    • Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    • Because eventually more minds get involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.

    I usually expect to perform the following tasks:
    • Establish the final infrastructure to measure the efficiency of the deployed models
    • Deploy the models in additional scenarios: through reports, and by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    • Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    • Create smart ETL applications that divert suspicious data for immediate or later inspection
    • Investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    • Repeat the data and model preparation for the actual project as needed

    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:
    • 1-2 days of training
    • 2-3 days for data preparation and data overview
    • 2 days for creating and evaluating the models
    • 1 day for initial preparation of the continuous learning infrastructure
    • 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, in my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time, without the need for constant engagement with me.

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today's guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions, actively contributes to the enterprise software business community, and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We're happy to have John join us today!

    Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities. The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform. If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter.

    1. Segment Duties Around the 3 Cs - Content, Collaboration and Contextual Data

    "Agility" is one of the most common business benefits touted by modern web platforms. It sounds good - who doesn't want to be agile, right? How exactly IT organizations go about supplying agility to their business counterparts often lacks definition, hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, augmented by as much self-service as possible to develop and manage the solution directly - all done in the absence of direct IT involvement. With Oracle WebCenter's depth in the areas of content management, its palette of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform's depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data - all three of which can have their foundations developed by IT, then provisioned to the business on a per-role basis.

    Content - Oracle WebCenter benefits from an extremely mature content repository. Workflow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites. When deploying WebCenter portal, take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process. This frees IT to work on more mission-critical challenges and allows the business to respond in short order to emerging market needs.

    Collaboration - Native collaborative services and WebCenter spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking. The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system - much like the first C, "content". This enables business analysts to design the roles required and IT to provision them with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site. Bottom line - the business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles.

    Contextual Data - Collaborative capabilities are most powerful when included within the context of business data. The ability to supply business users with decision-shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter. Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services about various potential locations and/or custom backend systems - presenting it directly in the context of the discussion. If there are data sources that already exist in your enterprise, take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities.

    2. Think Generically, Execute Specifically

    Constructs. Anyone who has spent much time around me knows that I am obsessed with this word. Why? Because constructs offer immense power - more than APIs, Web Services or other technical capabilities. Constructs offer organizations the ability to leverage a platform's native characteristics to deliver substantial business functionality - without writing code. This concept becomes more powerful as an organization's understanding of the platform's concepts deepens over time. Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time needed to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around each discipline. A space is a collection of secured pages within Oracle WebCenter. Spaces are not only secured; content stored within them is also scoped automatically to that space by default. Taking this a step further, Oracle WebCenter's Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations. In order to have a portal that allows users to "subscribe" to information around various disciplines, spaces can be used out of the box to achieve this capability, without any APIs or low-level technical work.

    3. Make Governance Work for You

    Imagine driving down the street without the painted lines on the road. The rules of the road are so ingrained in our minds that we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed-upon direction of flow. Additionally, and more importantly, they allow people to act autonomously - going where they please at any given time. The return on the investment in mobility is high enough for people to buy into globally agreed-upon governance processes. In Oracle WebCenter we can use enablers similar to lane markers. Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this? Just as with the "segmentation of duties", Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system, thanks to the constructs and security available within the platform. For instance, when a WebCenter space is created, any content added within that space will by default be secured to that particular space and will inherit metadata associated with a folder created for the space. Oracle WebCenter content uses metadata to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to subsections of a space, offering some interesting possibilities for automated management of content. An example may be press releases within a particular area of an extranet that require a five-year retention period and need to be reviewed by marketing and legal before release. The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data - which could otherwise become overwhelming.

    4. Make Your First Project Your Second

    Imagine if Michael Phelps were competing in a swimming championship, but was told right before his race that he had to use a brand new stroke. There is no doubt that Michael is an outstanding swimmer, but chances are that he would like some time to get acquainted with the new stroke. New technologies should not be treated any differently. Before jumping into the deep end it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements, in an effort to help the actual deployment get under way while increasing everyone's knowledge of the platform and the functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise.

    5. Get to Know the Community

    If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover. Chances are also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users sharing key tips around technology. The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners. With the new Oracle WebCenter brand, opportunities to connect with these experts have become easier:
    • Oracle WebCenter Blog
    • Oracle Social Enterprise LinkedIn WebCenter Group
    • Oracle WebCenter Twitter
    • Oracle WebCenter Facebook
    • Oracle User Groups
    Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • What is the state of the art in OOP?

    - by Ollie Saunders
    I used to do a lot of object-oriented programming and found myself reading up a lot on how to do it well. When C++ was the dominant OOP language, the set of best practices was very different from the one that has emerged since. Some of the newer ideas I know of are BDD, internal DSLs, and the importing of ideas from functional programming. My question is: is there any consensus on the best way to develop object-oriented software today in the more modern languages such as C#, Ruby, and Python? And what are those practices? For instance, I rather like the idea of stateless objects, but how many are actually using that in practice? Or, is the state of the art to deemphasize the importance of OOP? This might be the case for some Python programmers but would be difficult for Rubyists.

    Read the article

  • How to shift development culture from tech fetish to focusing on simplicity and getting things done?

    - by Serge
    Looking for ways to switch team/individual culture from chasing the latest fads, patterns, and all kinds of best practices to focusing on finding the quickest and simplest solutions and shipping features. My definition of "tech fetish": chasing the latest fads, applying new technologies and best practices without considering product/project impact, focusing on micro-optimization, creating platforms and frameworks instead of finding simple and quick ways to ship product features. A few examples of culture differences:
    • From "Spent a day trying to map a database query with five complex joins in NHibernate" to "Wrote a SQL query and used a DataReader to pull the data in"
    • From "Wrote a super-fast JSON parser in C++" to "Used Python to parse the JSON response and call the C++ code"
    • From "Let's use WCF because it supports all possible communication standards" to "REST is a simple text-based format, let's stick with it and use simple HTTP handlers"

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this, refactor, refactorrr." This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seems exactly the attitude many game devs use when writing production game code:

        Mistake #4: Building a system, not a game ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily towards the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation; there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

    Read the article

  • Patterns to deal with functions that can have different kinds of results

    - by KaptajnKold
    Suppose you have a method on an object that, given some input, alters the object's state if the input validates according to some complex logic. Now suppose that when the input doesn't validate, it can be due to several different things, each of which we would like to be able to deal with in different ways. I'm sure many of you are thinking: that's what exceptions are for! I've thought of this also. But my reservation against using exceptions is that in some cases there is nothing exceptional about the input not validating, and I really would like to avoid using exceptions to control what is really just the expected flow of the program. If there were only one interpretation possible, I could simply return a boolean value indicating whether or not the operation resulted in a state change, and respond appropriately when it did not. There is of course also the option of returning a status code, which the client can then choose to interpret or not. I don't like this much either, because there is nothing semantic about status codes. The solution I have so far is to always check for each possible situation that I am able to handle before I call the method, which then returns a boolean to inform the client whether the object changed state. This leaves me the flexibility to handle as few or as many of the possible situations as I wish, depending on the context I am in. It also has the benefit of making the method I am calling simpler to write. The drawback is that there is quite a lot of duplication in the client code wherever I call the method. Which of these solutions do you prefer and why? What other patterns do people use for providing meaningful feedback from functions? I know that some languages support multiple return values, and if I had that option I would surely prefer it.
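
    One pattern that keeps the distinct failure reasons semantic, without raising exceptions or resorting to bare status codes, is to return a small result object. A minimal sketch in Python (the Reason enum, the Account class, and its fields are all hypothetical illustrations, not from the question):

        from dataclasses import dataclass
        from enum import Enum, auto

        class Reason(Enum):
            OK = auto()
            MALFORMED = auto()      # hypothetical failure categories
            OUT_OF_RANGE = auto()

        @dataclass(frozen=True)
        class Result:
            changed: bool           # did the object change state?
            reason: Reason          # semantic reason, not a numeric code
            message: str = ""

        class Account:
            def __init__(self):
                self.balance = 0

            def deposit(self, amount) -> Result:
                # Each failed validation reports *why*, not just that it failed.
                if not isinstance(amount, int):
                    return Result(False, Reason.MALFORMED, "amount must be an integer")
                if amount <= 0:
                    return Result(False, Reason.OUT_OF_RANGE, "amount must be positive")
                self.balance += amount
                return Result(True, Reason.OK)

        # The caller handles only the cases it cares about.
        result = Account().deposit(-5)
        if not result.changed and result.reason is Reason.OUT_OF_RANGE:
            print("rejected:", result.message)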

    Read the article

  • Webcast: Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM

    - by Honglin Su
    We announced the Oracle VM Blade Cluster Reference Configuration last month; see the blog. The new Oracle VM blade cluster reference configuration can help reduce the time to deploy virtual infrastructure by up to 98 percent when compared to multi-vendor configurations. Customers and partners have shown lots of interest. Join Oracle's experts to learn the best practices for speeding virtual infrastructure deployment with Oracle VM; register for the webcast (1/25/2011) here.

    Virtualization has already been widely accepted as a means to increase IT flexibility and help IT services align better with changing business needs. The flexibility of a virtualized IT infrastructure enables new applications to be rapidly deployed, capacity to be easily scaled, and IT resources to be quickly redirected. The net result is that IT can bring greater value to the business, making virtualization an obvious win from a business perspective. However, building a virtualized infrastructure typically requires assembling and integrating multiple components (e.g. servers, storage, network, virtualization, and operating systems). This infrastructure must be deployed and tested before applications can even be installed. It can take weeks or months to plan, architect, configure, troubleshoot, and deploy a virtualized infrastructure. The process is not only time-consuming, but also error-prone, making it hard to achieve a timely and profitable return on investment.

    Oracle is the only vendor that can offer a fully integrated virtualization infrastructure with all of the necessary hardware and software components. The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components. It enables quick and easy deployment of the virtualized infrastructure using components that have been tested together and are all supported together by Oracle. To learn more about Oracle's virtualization offerings, visit http://oracle.com/virtualization.

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics. Missing, stale or skewed statistics can adversely affect performance. Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal execution plans for E-Business Suite. Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases. The new features and guidelines for when and how to gather statistics are published in the following whitepaper: Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1).

    The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request:
    • History Mode - back up existing statistics prior to gathering new statistics
    • GATHER_AUTO Option - gather statistics for tables based upon % change
    • Histograms - collect statistics for histograms
    • AUTO Sampling - use the new FND_STATS feature that supports the AUTO option for AUTO sample size
    • Extended Statistics - use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered
    • Incremental Statistics - gather incremental statistics for partitioned tables

    The new white paper also includes examples and performance test cases for the following:
    • Extended Optimizer Statistics
    • Incremental Statistics Gathering
    • Concurrent Statistics Gathering

    This white paper includes details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality. Your feedback is welcome: we would be very interested in hearing about your experiences with these new options for gathering statistics. Please feel free to post your comments here or drop us a line privately. Related Oracle OpenWorld 2013 session: Getting Optimal Performance from Oracle E-Business Suite (CON8485). Related My Oracle Support note: Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1). Non-EBS related blogs, white papers and My Oracle Support notes: Oracle Optimizer Blog; Understanding Optimizer Statistics (white paper); Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1).

    Read the article

  • Updated Batch Best Practices

    - by ACShorten
    The Batch Best Practices whitepaper has been updated and published to My Oracle Support with the latest advice and new facilities. Two of the more interesting updates are:
    • Addition of a Batch Cache Flush - Just like the online component, the batch component of the framework caches data. It is now possible to refresh the cache manually using a new batch job, F1-FLUSH. This is particularly useful if you execute long-running threadpool workers across many different batch processes (submitters).
    • New EXTENDED execution mode - A new specialist mode has been introduced for sites that use a large number of submitters (concurrent threads) and are experiencing intermittent communication issues in the threadpoolworker. This mode uses Oracle Coherence's Extend mode to allow submitters to be allocated to threadpoolworkers via proxy connections. It differs from CLUSTERED mode in that a submitter can be explicitly allocated to a specific threadpoolworker via a proxy connection. This mode is only used for specific situations, and customers should generally continue to use CLUSTERED mode.
    The whitepaper outlines advice for these new facilities and provides advice for existing functionality. The whitepaper is available from My Oracle Support as Doc ID 836362.1.

    Read the article

  • Best practices for App Idea ownership and shares

    - by JOG
    I am developing apps in my spare time. I am the sole developer, and two non-programmer friends of mine provide vision, content, algorithms and ideas. We always agree happily on all the features, todos and prioritizations. But naturally, coding it is the biggest part. When selling, we agree on splitting profit equally, that is 33% each. But version 1.0 naturally does not sell much, and I go on to try to make the app more viral. This includes tons of stuff where the others are of little help: adding support for sharing, Facebook Connect, gamifying, letting users add content, the home page, support, maintenance, and server services to make it easier to update content. The list is long. Suddenly I will be doing 100% of a lot of work but only "own" a third of the income. My friends may either "fade out" of the project after 1.0, or continue to contribute, but with less value, and I would rather exchange them for more programmers or graphic designers. The effort they made up to version 1.0 is worth a lot to the app, and I realize I would never have done it without them. But I am doing all the work in the end. It is hard to negotiate about splitting 90/5/5 instead of 33% each, because the idea is still theirs. How do I solve this? What are the best practices regarding ownership of the app? What kind of agreements could I make that would keep it beneficial and motivational for me to continue developing the app?

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage design dependencies, and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have a poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small) are:
    • Running presentations about design principles (SOLID, clean code, etc.)
    • Workshops about TDD and BDD
    • Coaching teams (this includes using sonar, findbugs, jdepend and other tools)
    • IDE & refactoring talks
    A few things I'm thinking of doing in the future (but I'm concerned they might not be good):
    • Forming a team of OO evangelists, who disseminate an OO way of thinking in different teams (these people would need to change teams every few months)
    • Running design review sessions, to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful)
    Something I found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month. So whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

    Read the article

  • Best practices in versioning

    - by Gerenuk
    I develop some scripts for data analysis in a small team. For the moment we use SVN, but not in a very structured way. We haven't even looked at how to use branches, even though we need that functionality. What do you suggest as the best practice to set up the following system:
    • two code bases (core and plugins)
    • versions can be incompatible with previous scripts
    • sometimes individual features are being developed and not yet finished, while other fixes have to be made urgently to the code
    In the end we don't deliver the code as a package, but rather place the Python scripts in some directory (with version names?). Some other Python script, which serves as a configuration, chooses the desired version, sets the path to these libraries and then starts to import the modules. I saw stable releases being named "trunk" so I did the same. However, no version numbers yet. Core and plugins are different repositories, but we have to match versions for compatibility. Can you suggest some best practices or references to ease development and reduce chaos? :) Some suggested Git. I haven't heard about it, but I'm free to change.

    Read the article

  • What are the best practices for implementing the == operator for a class in C#?

    - by remio
    While implementing an == operator, I have the feeling that I am missing some essential points, so I am searching for some best practices around it. Here are some related questions I am thinking about: How do I cleanly handle the reference comparison? Should equality be implemented through an IEquatable<T>-like interface, or by overriding object.Equals? And what about the != operator? (This list might not be exhaustive.)
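
    The question is about C#, but the usual checklist transfers across languages: short-circuit on reference identity, check the type, compare fields, keep != consistent with ==, and keep the hash code consistent with equality. Here is that checklist sketched in Python, where __eq__, __ne__ and __hash__ play the roles of ==, != and GetHashCode (the Money class and its fields are illustrative, not from the question):

        class Money:
            def __init__(self, amount, currency):
                self.amount = amount
                self.currency = currency

            def __eq__(self, other):
                if self is other:                   # reference comparison short-circuit
                    return True
                if not isinstance(other, Money):    # type (and None) check
                    return NotImplemented
                return (self.amount, self.currency) == (other.amount, other.currency)

            def __ne__(self, other):
                result = self.__eq__(other)         # != defined in terms of ==
                return result if result is NotImplemented else not result

            def __hash__(self):
                # Equal objects must hash equally, mirroring GetHashCode in C#.
                return hash((self.amount, self.currency))

        assert Money(5, "EUR") == Money(5, "EUR")
        assert Money(5, "EUR") != Money(5, "USD")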

    Read the article
