Search Results

Search found 10046 results on 402 pages for 'repository pattern'.


  • C# 4.0: Alternative To Optional Arguments

    - by Paulo Morgado
    Like I mentioned in my last post, publicly exposing methods with optional arguments is a bad practice (that's why C# resisted having them until now). You might argue that your method or constructor has too many variants and that having ten or more overloads is a maintenance nightmare, and you're right. But the solution has been there for ages: have an arguments class. The arguments class pattern has been used in the .NET Framework by several classes, like XmlReader and XmlWriter, which use it in their Create methods since version 2.0:

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Auto;
        XmlReader.Create("file.xml", settings);

    With this pattern, you don't have to maintain a long list of overloads, and any default values for properties of XmlReaderSettings (or XmlWriterSettings for XmlWriter.Create) can be changed, or new properties added, in future implementations without breaking existing compiled code. You might now argue that it's too much code to write, but, with object initializers added in C# 3.0, the same code can be written like this:

        XmlReader.Create("file.xml", new XmlReaderSettings { ValidationType = ValidationType.Auto });

    Looks almost like named and optional arguments, doesn't it? And, who knows, in a future version of C#, it might even look like this:

        XmlReader.Create("file.xml", new { ValidationType = ValidationType.Auto });
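
    For completeness, here is a minimal sketch of applying the same pattern to your own API. ReportExporter and ExportOptions are made-up names for illustration, not framework types:

        using System;

        // A sketch of the arguments class pattern: one method, one options class,
        // instead of a ladder of overloads. Defaults live in the options class and
        // can evolve without breaking compiled callers.
        public class ExportOptions
        {
            public bool IncludeHeader { get; set; }
            public string Delimiter { get; set; }

            public ExportOptions()
            {
                IncludeHeader = true;  // defaults set here, not in method signatures
                Delimiter = ",";
            }
        }

        public static class ReportExporter
        {
            public static void Export(string path, ExportOptions options)
            {
                options = options ?? new ExportOptions();
                Console.WriteLine("Exporting to {0} (header: {1}, delimiter: '{2}')",
                                  path, options.IncludeHeader, options.Delimiter);
            }
        }

    The call site then reads almost like named arguments: ReportExporter.Export("report.csv", new ExportOptions { Delimiter = ";" });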

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat
    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions
    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts. I don't mention them.

    Preparation
    It goes without saying that you should always do a proper backup before you change anything on your production environment. By proper backup, I also mean a tested and verified restore process. If you have never dared to test it before, do it now. It pays off.

    Requirements
    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration
    With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and Configuration of the instances (the runtime). This is really great as you can install all the software and create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances later.

    Installation Steps
    The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls.

    Step 1 The Database
    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2 The Repository Creation Utility
    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3 The Foundation
    With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it might be possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a The JDK
    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK. Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own separate directory (and not in one of your ORACLE_HOMEs). You can then use whichever you like.

    Step 3b Install WLS for OID and OAM
    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4 Install OID
    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5 Install SOA Suite
    The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later.

    Step 6 Install OAM
    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7 Backup the Installation
    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point.

    Step 8 Configure OID
    The software is installed and now we need instances to run it. This process is called configuration. For OID use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter some issues, check the Oracle Support site for help. This configuration will also start the OID instance.

    Step 9 Install the Oracle AS SSO Schema
    Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start the procedure. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10 Configure OAM
    Reached this step? Great. You are ready to create an OAM instance. Use the $IAM_HOME/iam/common/bin/config.sh for this. This will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

    Step 11 Install WLS for Forms 11g
    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in Step 3b apply. We call this one FORMS_HOME.

    Step 12 Install Forms
    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step.

    Step 13 Configure Forms
    To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically.

    Step 14 Set up your Forms SSO Environment
    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

    Step 15 Migrate the SSO Repository
    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat, or ua.cmd) on the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.
    Note: At the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

    Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY
    In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Next we change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog to ask for your user credentials. Done. Now your Forms environment is successfully integrated with OAM 11g. Enjoy.

    What's Next?
    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

    References
    Nearly everything is documented. Use the documentation!
    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14
    Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10

  • Wrong perspective is showing in Eclipse plugin project [closed]

    - by Arun Kumar Choudhary
    I am working with the Eclipse Modeling Framework (Eclipse plugin development). In my project, the tool provides three perspectives: 1. Accelerator Analyst, 2. Contract Validation, and 3. Underwriter Rules Editor. By default it starts with the Contract Validation perspective (as we define it within plugin_customization.ini). However, switching to another perspective does not change the perspective shown. All perspective definitions (class, id and name) live only inside plugin.xml, since it is the task of org.eclipse.ui.perspectives to bring the named perspective to the front. Seven times out of ten it works fine, but I cannot tell why it does not work in the other three cases. I am pasting my plugin.xml file:

    <?xml version="1.0" encoding="UTF-8"?>
    <?eclipse version="3.0"?>
    <plugin>
    <extension id="RuleEditor.application" name="Accelerator Tooling" point="org.eclipse.core.runtime.applications"> <application> <run class="com.csc.fs.underwriting.product.UnderWritingApplication"> </run> </application> </extension>
    <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.ContractValidationPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.ContractValidationPerspective" name="Contract Validation"> </perspective> </extension>
    <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.UnderwritingPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.UnderwritingPerspective" name="Underwriting"> </perspective> </extension>
    <extension id="product" point="org.eclipse.core.runtime.products"> <product application="com.csc.fs.nba.underwriting.application.RuleEditor.application" name="Rule Configurator Workbench" description="%AppName"> <property name="introTitle" value="Welcome to Accelerator Tooling"/> <property name="introVer" value="%version"/> <property name="introBrandingImage" value="product:csclogo.png"/> <property name="introBrandingImageText" value="CSC FSG"/> <property name="preferenceCustomization" value="plugin_customization.ini"/> <property name="appName" value="Rule Configurator Workbench"> </property> </product> </extension>
    <extension point="org.eclipse.ui.intro"> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" icon="icons/Welcome.gif" id="com.csc.fs.nba.underwriting.intro"/> <introProductBinding introId="com.csc.fs.nba.underwriting.intro" productId="com.csc.fs.nba.underwriting.application.product"/> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" id="com.csc.fs.nba.underwriting.application.intro"> </intro> <introProductBinding introId="com.csc.fs.nba.underwriting.application.intro" productId="com.csc.fs.nba.underwriting.application.product"> </introProductBinding> </extension>
    <extension name="Accelerator Tooling" point="org.eclipse.ui.intro.config"> <config content="$nl$/intro/introContent.xml" id="org.eclipse.platform.introConfig.mytest" introId="com.csc.fs.nba.underwriting.intro"> <presentation home-page-id="news"> <implementation kind="html" os="win32,linux,macosx" style="$nl$/intro/css/shared.css"/> </presentation> </config> <config content="introContent.xml" id="com.csc.fs.nba.underwriting.application.introConfigId" introId="com.csc.fs.nba.underwriting.application.intro"> <presentation home-page-id="root"> <implementation kind="html" os="win32,linux,macosx" style="content/shared.css"> </implementation> </presentation> </config> </extension>
    <extension point="org.eclipse.ui.intro.configExtension"> <theme default="true" id="org.eclipse.ui.intro.universal.circles" name="%theme.name.circles" path="$nl$/themes/circles" previewImage="themes/circles/preview.png"> <property name="introTitle" value="Accelerator Tooling"/> <property name="introVer" value="%version"/> </theme> </extension>
    <extension point="org.eclipse.ui.ide.resourceFilters"> <filter pattern="*.dependency" selected="true"/> <filter pattern="*.producteditor" selected="true"/> <filter pattern="*.av" selected="true"/> <filter pattern=".*" selected="true"/> </extension>
    <extension point="org.eclipse.ui.splashHandlers"> <splashHandler class="com.csc.fs.nba.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.nba.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandler class="com.csc.fs.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandlerProductBinding productId="com.csc.fs.nba.underwriting.application" splashId="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandlerProductBinding> </extension>
    <extension id="com.csc.fs.pa.security" point="com.csc.fs.pa.security.implementation.secure"> <securityImplementation class="com.csc.fs.pa.security.PASecurityImpl"> </securityImplementation> </extension>
    <extension id="productApplication.security.pep" name="com.csc.fs.pa.producteditor.application.security.pep" point="com.csc.fs.pa.security.implementation.authorize"> <authorizationManager class="com.csc.fs.pa.security.authorization.PAAuthorizationManager"> </authorizationManager> </extension>
    <extension point="org.eclipse.ui.editors"> <editor class="com.csc.fs.underwriting.product.editors.PDFViewer" extensions="pdf" icon="icons/pdficon_small.gif" id="com.csc.fs.pa.producteditor.application.editors.PDFViewer" name="PDF Viewer"> </editor> </extension>
    <extension point="org.eclipse.ui.views"> <category id="com.csc.fs.pa.application.viewCategory" name="%category"> </category> </extension>
    <extension point="org.eclipse.ui.newWizards"> <category id="com.csc.fs.pa.application.newWizardCategory" name="%category"> </category> <category id="com.csc.fs.pa.application.newWizardInitialize" name="%initialize" parentCategory="com.csc.fs.pa.application.newWizardCategory"> </category> </extension>
    <extension point="com.csc.fs.pa.common.usability.addNewCategory"> <addNewCategoryId id="com.csc.fs.pa.application.newWizardCategory"> </addNewCategoryId> </extension>
    <!--extension point="org.eclipse.ui.activities"> <activity description="View Code Generation Option" id="com.csc.fs.pa.producteditor.application.viewCodeGen" name="ViewCodeGen"> </activity> <activityPatternBinding activityId="com.csc.fs.pa.producteditor.application.viewCodeGen" pattern="com.csc.fs.pa.bpd.vpms.codegen/com.csc.fs.pa.bpd.vpms.codegen.bpdCodeGenActionId"> </activityPatternBinding> Add New Product Definition Extension </extension-->
    </plugin>

    Inside each class (the qualified classes in the XML above) I only hide and show views according to the perspective, and that works very well. Please suggest any method that Eclipse provides which I can override in each class so that the perspective switch works accordingly.

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10
    More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010)

    Peter Scott: Realtime Data Warehouse Loading
    Rittman-Mead's Peter Scott looks at putting data into a data warehouse in real time. (tags: oracle datawarehousing businessintelligence)

    Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT
    Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast)

    Management Pack for Identity Management Viewlet
    A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement)

    @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it)
    "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility)

    @vampbenepe: Integration patterns for social data: the Open Social Data Bus
    "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

  • CQRS without using other patterns

    - by John Smith
    I would like to explain CQRS to my team of developers. I just can't figure out how to explain it in the simplest way so they can implement the pattern rapidly without any other frameworks. I've read a lot of resources, including videos and articles, but I can't find how to implement CQRS without using other patterns like a service bus, the event sourcing pattern, or domain-driven design. I know the purpose of these patterns, but for the first step, I don't want them to think CQRS and these patterns must be tied together. My first idea is to say that CQRS is about separating the read part and the write part. The read part is composed only of the UI project and a DAL project. The write part is composed of a typical multilayer architecture: UI/BLL/DAL. Then, does CQRS say we must also have two datastores? What about the notion of commands which reveal the user's intention - is that part of CQRS or of DDD? Basically: how do you implement CQRS without using other patterns? I concede it's also not that clear in my own mind, because I'm used to working with NCQRS/DDD/Event Sourcing/Service Bus in my personal projects. Thanks
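
    For a concrete starting point, here is a minimal sketch of CQRS stripped down to its core idea: separate write and read paths, no bus, no event sourcing, a single data store. All type names are illustrative, and the in-memory dictionary merely stands in for whatever database you use:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Write side: a command expresses intent; a small handler applies it.
        public class RenameCustomerCommand { public int CustomerId; public string NewName; }

        public class RenameCustomerHandler
        {
            private readonly IDictionary<int, string> _store;
            public RenameCustomerHandler(IDictionary<int, string> store) { _store = store; }

            public void Handle(RenameCustomerCommand command)
            {
                // Business rules and validation live on the write side.
                if (string.IsNullOrEmpty(command.NewName))
                    throw new ArgumentException("A name is required.");
                _store[command.CustomerId] = command.NewName;
            }
        }

        // Read side: thin queries returning DTOs shaped for the screen, no business logic.
        public class CustomerListItem { public int Id; public string Name; }

        public class CustomerQueries
        {
            private readonly IDictionary<int, string> _store;
            public CustomerQueries(IDictionary<int, string> store) { _store = store; }

            public IEnumerable<CustomerListItem> GetAll()
            {
                return _store.Select(kv => new CustomerListItem { Id = kv.Key, Name = kv.Value });
            }
        }

        class Program
        {
            static void Main()
            {
                var db = new Dictionary<int, string> { { 1, "Acme" } };
                new RenameCustomerHandler(db).Handle(
                    new RenameCustomerCommand { CustomerId = 1, NewName = "Acme Corp" });
                foreach (var c in new CustomerQueries(db).GetAll())
                    Console.WriteLine("{0}: {1}", c.Id, c.Name);   // 1: Acme Corp
            }
        }

    Note that both sides point at the same store here: a second, denormalized read store is an optimization you can add later, not a requirement of the pattern.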

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil)
    If your OBIEE 11.1.1.7.x is set up in the following way:
    The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach ...
    .. will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository.
    .. should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and Oracle DB.
    .. will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)

  • Developing a feature whose sole purpose is to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature in the final product? This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company, it is well known that middle management is pressured to "give inputs" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs". In one project at the above company, the artists and developers created a supernumerary animated character that appeared in every cutscene and stuck out like a sore thumb. They designed it in such a way that it could be easily removed before the game shipped (this was when games were still sold on physical media and not as downloadable products). Obviously the management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive inputs to the product. This process pattern has a name among game programmers who work in large corporations, but I forget the actual name. I believe it's duck-something. Can anybody help point out the name, and perhaps some credible reference to how the pattern develops?

  • Big project layout : adding new feature on multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components with a version control management system. In my current project there are four major parts: Web, Server, Admin console, and Platform. The Web and Server parts use two libraries that I wrote. In total there are five git repositories and one mercurial repository. The project build script is in the Platform repository. It automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to create a branch in each of the affected repos, implement the feature, and merge it back. My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching will be easier in that case. Or should I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch on each repository?

  • Are there open source alternatives to Bitbucket, Github, Kiln, and similar DVCS browsing and management tools?

    - by Ryan Taylor
    I am aware of several tools/services that provide DVCS browsing and management, such as Bitbucket, Github, Kiln, SCM-Manager and Rhodecode. However, the use case I am considering is one such that: Any source code must reside on an employer's internal servers. The solution must be open source. It should provide a Bitbucket- or Github-like experience, including a project wiki, repository browsing and management, and social coding aspects such as code review. The solution should have mercurial support (if not support for other DVCSs). Of these, only SCM-Manager and RhodeCode come close, as they can be installed on your own servers and are open source. However, they do not have the Bitbucket or Github experience: there is no issue tracker or wiki, and the UI, while functional, is not up to par with Github or Bitbucket. I can get close with Trac or Redmine with their repository browsers, but unfortunately they do not have any repository management capabilities. Are there other open source tools out there that would provide a similar experience to Bitbucket, Github or Kiln?

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and the others have been widely adopted and have become the must-have tools in any software development project. Source artifacts are checked out, modified, checked in, and all the history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Well, we definitely need a way to export/import our artifacts from/to this repository. Oracle BI Publisher report development is based on such a shared repository model (the catalog), and starting from BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be utilized to export/import the reports from the command line. To start using the BI Publisher Catalog Utility you should:
    Go to the file system of the server where the BI Publisher binaries have been installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
    Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
    If you do not want to pass the BI Publisher server URL, username and password during each invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml
    Open the terminal window and go to <BIP_CLIENT_DIR>/bin
    Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME
    Now it's time to run the utility:
    if you are on Linux - just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation
    if you are on MS Windows - the bad news is that the command script for MS Windows is missing, and support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I've created this script already; please download it from GitHub.
    Hope you will find this utility useful during your day-to-day BI Publisher development.

  • New features for Expression Blend 4 Release Candidate

    - by kaleidoscope
    With Microsoft Expression Blend 4, you can create websites and applications based on Microsoft Silverlight 3 and Microsoft Silverlight 4, and desktop applications based on Windows Presentation Foundation (WPF) 3.5 with Service Pack 1 (SP1) and WPF4. Expression Blend provides new support for prototyping, interactivity through behaviors, special Silverlight functionality, and on-the-fly sample data generation. Expression Blend includes new behaviors that are quickly and easily configured, and offers new sample data, behaviors, and project template features to support the Model-View-ViewModel (MVVM) pattern. The MVVM pattern is a way to structure a Silverlight or WPF application so that user interface (UI) objects are as decoupled as possible from the application's data and behavior. This makes it easier for design tasks and development tasks to be performed independently and without breaking each other. Essentially, your UI is the View. You bind objects in the View to properties and commands of the ViewModel, and the View can also call methods on the ViewModel. Expression Blend 4 is compatible with Silverlight 3 and WPF 3.5 with Service Pack 1 (SP1), and is interoperable with Visual Studio. It also includes:
    New shapes: The Assets panel in Expression Blend contains a new Shapes category, including presets for the easy creation of arcs, arrows, callouts, and polygons.
    New controls: Expression Blend has tooling support for the RichTextBox control in Silverlight.
    XAML cleanliness: Expression Blend generates less XAML with respect to animations and animation-related properties.
    MVVM project template: Expression Blend includes a new project template that offers a basic starting point for Model-View-ViewModel pattern applications.
    Run project with CTRL+F5: To improve consistency with Visual Studio, you can now invoke the Run Project command by pressing either CTRL+F5 or F5.
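
    As a companion to the MVVM description above, here is a minimal sketch of the ViewModel half of the pattern; the class name is illustrative, not part of Expression Blend:

        using System.ComponentModel;

        // A minimal ViewModel: the View binds to Name, and because the setter
        // raises PropertyChanged, the UI updates without the View knowing
        // anything about where the data comes from.
        public class CustomerViewModel : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    In XAML, the View would bind with something like Text="{Binding Name}", which is what keeps design and development work decoupled.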

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application, and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately.

    Sample scenario: Let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned.

    What I'd like to do: To unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: To unit-test the repository, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified.

    I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) removes the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or, if not bad, then not the expected way to get said feedback?)
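
    For contrast, here is a sketch of the controller-level unit test that is possible today; it stops one step short of rendering the view, which is exactly the gap the question is about. The controller, service, and entity are hypothetical, and Moq/NUnit are used purely for illustration:

        using System.Web.Mvc;
        using Moq;
        using NUnit.Framework;

        // Hypothetical application types, included so the sketch is self-contained.
        public class Customer { public string Name; public string Address; }
        public interface ICustomerService { Customer GetCustomer(int id); }

        public class CustomerController : Controller
        {
            private readonly ICustomerService _service;
            public CustomerController(ICustomerService service) { _service = service; }
            public ActionResult Details(int id) { return View(_service.GetCustomer(id)); }
        }

        [TestFixture]
        public class CustomerControllerTests
        {
            [Test]
            public void Details_passes_name_and_address_to_the_view()
            {
                // Mock out the business rules and feed them a known customer.
                var service = new Mock<ICustomerService>();
                service.Setup(s => s.GetCustomer(42))
                       .Returns(new Customer { Name = "Alice", Address = "1 Main St" });

                var result = (ViewResult)new CustomerController(service.Object).Details(42);
                var model = (Customer)result.ViewData.Model;

                // Verifies the model handed to the view -- but not the rendered markup,
                // which is what the question would also like to assert on.
                Assert.AreEqual("Alice", model.Name);
                Assert.AreEqual("1 Main St", model.Address);
            }
        }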

  • JavaScript Class Patterns Revisited: Endgame

    - by Liam McLennan
    I recently described some of the patterns used to simulate classes (types) in JavaScript. But I missed the best pattern of them all. I described a pattern I called constructor function with a prototype that looks like this:

        function Person(name, age) {
          this.name = name;
          this.age = age;
        }

        Person.prototype = {
          toString: function() {
            return this.name + " is " + this.age + " years old.";
          }
        };

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    and I mentioned that the problem with this pattern is that it does not provide any encapsulation, that is, it does not allow private variables. Jan Van Ryswyck recently posted the solution, obvious in hindsight, of wrapping the constructor function in another function, thereby allowing private variables through closure. The above example becomes:

        var Person = (function() {
          // private variables go here
          var name, age;

          function constructor(n, a) {
            name = n;
            age = a;
          }

          constructor.prototype = {
            toString: function() {
              return name + " is " + age + " years old.";
            }
          };

          return constructor;
        })();

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    Now we have prototypal inheritance and encapsulation. The important thing to understand is that the constructor and the toString function both have access to the name and age private variables because they are in an outer scope and they become part of the closure.

  • Remap keyboard Ubuntu 12.04; Asus Q500A

    - by hydroxide
    I have an Asus Q500A with win8 and Ubuntu 12.04 64 bit; linux kernel 3.8.0-32-generic. I am using gnome-panel and xserver-xorg-lts-raring. I have been experiencing problems with the keyboard shortcuts since I did a fresh install. fn+f10 is supposed to mute my system, but instead it will repeatedly press d. fn+f11 is volume down, but it presses c. fn+f12 is volume up, but it presses b repeatedly. Most of the other on-board shortcuts, such as adjusting screen and LED brightness, work most of the time, but sometimes press other letters repeatedly. Also, sometimes my Ctrl gets held down for no reason. Everything works fine in Windows. I have tried installing all recommends and running sudo dpkg-reconfigure -a to reconfigure all packages, which did not solve my problem. I have tried using KeyTouch editor to edit keymaps, navigating to /usr/share/X11/xkb/keymap; when I try opening any of these files it says the file contains no keyboard element. I think if I were just able to remap my keyboard it might solve my issues; otherwise, if anyone knows where I can get Asus drivers for 12.04, please let me know. Apparently I didn't have all repositories enabled. I executed the following commands and am trying the updates they give me (getting linux kernel 3.8.0-33-generic as well as a bunch of other packages):

        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
        sudo add-apt-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview
    The purpose of this article is to describe some of the important foundational concepts of ATG. This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc. For more information on these topics, please see the online product manuals.

    Modules
    The first concept is called the 'ATG Module'. Simply put, you can think of modules as the building blocks for ATG applications. The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules). Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules. Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration. Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc. Modules generally will have dependencies on other modules (the modules beneath them). For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module.

    Modules have a ton of value because they provide a way to decouple a customer's implementation from the out of the box ATG modules. This allows for a much easier job when it comes time to upgrade the ATG platform. Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.

    One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start. One of the first things ATG does is to look through all the modules you specified and, for each one, determine a list of modules that are also required to start (based on each module's dependencies). Once this final, ordered list is determined, ATG continues to boot up. One of the outputs from the ordered list of modules is that each module can contain its own classes and configuration. During boot, the ordered list of modules drives the unified classpath and configpath. This is what determines which classes override others, and which configuration overrides other configuration. Think of it as a layered approach.

    The structure of a module is well defined. It simply looks like a folder in a filesystem that has certain other folders and files within it. Here is a list of items that can appear in a module (MyModule):
    META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module. One important property is what other modules this module depends on.
    config - this is typically present in most modules. It defines a tree structure (folders containing properties files, XML, etc) that maps to ATG components (these are described below).
    lib - this contains the classes (typically in jarred format) for any code defined in this module.
    j2ee - this is where any web-apps would be stored.
    src - in case you want to include the source code for this module, it's standard practice to put it here.
    sql - if your module requires any additions to the database schema, you should place that schema here.

    Modules can also contain sub-modules. A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).

    Finally, it is important to completely understand how modules work if you are going to be able to leverage them effectively. There are many different ways to design the modules you want to create; some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications.

    Components
    A component in ATG can be thought of as a single item that performs a certain set of related tasks. An example could be a ProductViews component - used to store information about what products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed in order of their being viewed). These component properties would typically also offer get and set methods used to retrieve and store the property values. Components typically will also offer other types of useful methods aside from get and set. In the ProductViews component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not.

    Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components. So, when you create a new ATG component, it will be able to be found 'within' nucleus. Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work. It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere.

    Components can be extremely simple (i.e. a single property with a get method), or can be rather complex, offering many properties and methods. To be an ATG component, a few things are required:
    a class - you can reference an existing out of the box class or you could write your own
    a properties file - this is used to define your component
    the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder

    Within the properties file, you will need to point to the class you want to use:
    $class=com.mycompany.myclass
    You may also want to define the scope of the class (request, session, or global):
    $scope=session

    In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work. You can configure components as well as extend their functionality by writing code.

    Repositories
    Repositories (a.k.a. Data Anywhere Architecture) are the mechanism that ATG uses to access data primarily stored in relational databases, but also in LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical: if not handled properly, it could create a bottleneck. ATG's repository functionality has been around for a long time and has proven to be extremely scalable. Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture.

    Repositories essentially map relational tables to objects in ATG, as well as handle caching. ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc), and each is comprised of both the underlying database schema and the associated repository definition files (XML).

    It is fully expected that implementations will extend / change the out of the box repository definitions, so there is a prescribed approach to doing this. The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above). The other important best practice is to never modify the out of the box schema - in other words, don't add columns to existing ATG tables, just create your own new tables. These will help ensure you can easily upgrade your application at a later date.

    xml-combination
    As mentioned earlier, when you start ATG, the order of the modules will determine the final configpath. Files within this configpath are 'layered' such that modules on top can override configuration of modules below them. This is the same concept for repository definition files. If you want to add a few properties to the out of the box user profile, you simply need to create an XML file containing only your additions, and place it in the correct location in your module. At boot time, your definition will be combined (hence the term xml-combination) with the lower, out of the box modules, with the result being a user profile that contains everything (out of the box, plus your additions). Aside from just adding properties, there are also ways to remove and change properties.

    types of properties
    Aside from the normal 'database backed' properties, there are a few other interesting types:
    transient properties - these are properties that are in memory, but not backed by any database column. These are useful for temporary storage.
    java-backed properties - by nature, these are transient, but in addition, when you access this property (by calling the get method), instead of looking up a piece of data, it performs some logic and returns the results. 'Age' is a good example - if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property to look at the birth date, compare it to the current date, and return the person's age.
    derived properties - this is what allows for inheritance within the repository structure. You could define a property at the category level, and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.

    caching
    There are a number of different caching modes which are useful at different times depending on the nature of the data being cached. For example, the simple cache mode is useful for things like user profiles. This is because the user profile will typically only be used on a single instance of ATG at one time. Simple cache mode is also useful for read-only types of data such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order. There are many options in terms of configuring caching which are outside the scope of this article - please refer to the product manuals for more details.

    Other important concepts - out of scope for this article
    There are a whole host of concepts that are very important pieces of the ATG platform, but are out of scope for this article. Here's a brief description of some of them:
    formhandlers - these are ATG components that handle form submissions by users.
    pipelines - these are configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order.
    special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations. See the manuals for more information.
    web development - JSP/DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library. This library is used throughout your JSP pages to interact with all the ATG components.
    messaging - a message sub-system used as another way for components to interact.
    personalization - ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using subversion, but soon we're making a switch to github. I'm looking at different types of github workflows, and we're not sure if the whole forking concept in github for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if that's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow

    If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

  • How does this ruby error handling module code work

    - by Michael Durrant
    Trying to get a better handle on ruby exception handling. I have this code (from a book):

        def err_with_msg(pattern)
          m = Module.new
          (class << m; self; end).instance_eval do
            define_method(:===) do |e|
              pattern === e.msg
            end
          end
          m
        end

    So, OK, this is a method. We're creating a new Module. I think of modules as mix-ins; not sure what one is doing here. Now we add the module to the class. Fair enough. Then we have self on its own. What is that for? I guess we have a little anonymous method that is just about self. Hmmm, OK, now for each of the above, check the pattern match. But for each what? I thought the above was for a new Module. Did the module get to use instances by being included? A better explanation of what's going on here would be most helpful.

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all. I just did a fresh installation of Ubuntu 10.04 LTS server without any issues. However, apt-get update (or aptitude update) is giving the following errors:

    a. "bzip2: (stdin) is not a bzip2 file. Ign" for all lines, plus the following errors:

        Fetched 3,582B in 0s (74.1kB/s)
        Reading package lists...
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: Failed to fetch security.ubuntu.com/ubuntu/dists/lucid-security/Release
        W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid/Release
        W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    Please guide in resolving this error. TIA
    Regards
    Venu

  • Ubuntu 12.04 stopped recognizing my BenQ monitor and reduced resolution to 1024x768

    - by Omri
    A few days ago I installed Ubuntu 12.04 32bit Desktop. It recognized my hardware without a problem (at least that I know of) and all worked fine. I left my system running through the night (it is at work) because it is also working as a database server, and when I came to work today the resolution was 1024x768 (the monitor recommends 1920x1080), even though in the Displays section of System Settings it was recognized as BenQ, and no higher resolution was offered. After a restart, the monitor name changed from BenQ to Unknown. This is a desktop computer. I also installed gtk-redshift and f.lux. I checked Additional Drivers to see if there is something I can install, but it didn't find anything. I tried to Google it, but I didn't find anything about a monitor no longer being recognized after it was already working. I did enable some PPAs yesterday, namely webupd8, mozillateam/thunderbird-stable and some others, and I also followed the instructions to patch NotifyOSD to be more friendly:

        sudo add-apt-repository ppa:caffeine-developers/ppa
        sudo add-apt-repository ppa:leolik/leolik
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install libnotify-bin
        pkill notify-osd
        sudo add-apt-repository ppa:nilarimogard/webupd8
        sudo apt-get update
        sudo apt-get install notifyosdconfig

    I have now purged both the caffeine-developers and leolik PPAs in the hope it would help, but no change. Has there been a change in the packages that could introduce this problem? Any help will be very appreciated :-) Omri

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I'm not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components:
    DataTier (contains EF POCO objects)
    DomainModel (contains domain-related objects)
    Global (among other common things, it contains Repository objects for CRUD to the DB)
    Business Layer (business logic and interaction between data and client, and CRUD using the repository)
    Web (Client) (which talks to the DomainModel and Business layers, but also has its own ViewModels for Create and Edit views, for example)
    Note: I am using ValueInjector for converting one type of entity to another (which is proving an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it must (1) create a ViewModel, (2) convert the ViewModel to a DomainModel for the Business layer to understand, and (3) convert that to a DataModel for the Repository - and then data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,

  • Design for an object with optional and modifiable attributes?

    - by Ikuzen
    I've been using the Builder pattern to create objects with a large number of attributes, where most of them are optional. But up until now, I've defined them as final, as recommended by Joshua Bloch and other authors, and haven't needed to change their values. I am wondering, though, what I should do if I need a class with a substantial number of optional but non-final (mutable) attributes? My Builder pattern code looks like this:

        public class Example {
            // All possible parameters (optional or not)
            private final int param1;
            private final int param2;

            // Builder class
            public static class Builder {
                private final int param1; // Required parameters
                private int param2 = 0;   // Optional parameters - initialized to default

                // Builder constructor
                public Builder(int param1) {
                    this.param1 = param1;
                }

                // Setter-like methods for optional parameters
                public Builder param2(int value) {
                    param2 = value;
                    return this;
                }

                // build() method
                public Example build() {
                    return new Example(this);
                }
            }

            // Private constructor
            private Example(Builder builder) {
                param1 = builder.param1;
                param2 = builder.param2;
            }
        }

    Can I just remove the final keyword from the declaration to be able to access the attributes externally (through normal setters, for example)? Or is there a creational pattern that allows optional but non-final attributes that would be better suited in this case?

    Read the article

  • TODO Formatting

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott
    TODOs should only be used for a short period of time, to remind you that something needs to be done; they should then be addressed as soon as possible. In order to know who owns a TODO task and how long it has been outstanding, my company uses the following standard for TODO formatting:
    Format: // TODO: Owner Initials – Date Created – Description of task.
    Sample: // TODO: CM – 2012/01/20 – Move this class to a new location so it can be reused.
    Using this pattern makes it easy to use the Resharper TODO explorer.
    The Carrot: To make it easy for developers to apply this rule, a code snippet can be created in Visual Studio. Even better, I created a Resharper template, which can use the current-user-name and current-date macros. That makes the formatting look like this:
    Sample: // TODO: cmott – 2012/01/20 – Move this class to a new location so it can be reused.
    The Stick: How do you enforce such a rule? I tried to create a custom Resharper Highlighting Pattern to inspect for deviations from this pattern, but I did not have any success: the find dialog would not accept the // text. If I work it out, I will update this blog post.
    StyleCop: Instead, I created a custom StyleCop rule, following the approach used in the StyleCop Contrib project, which provides a simple-to-use base class and unit testing framework. I will upload this TODO format analyzer as a patch to that project.
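
    By way of illustration only (this is not the StyleCop rule from the article), a small sketch of a regex check that accepts the agreed format and rejects free-form TODOs:

    using System;
    using System.Text.RegularExpressions;

    class TodoFormatCheck
    {
        // Matches: // TODO: XX – 2012/01/20 – description
        // (initials or a short username, a yyyy/MM/dd date, then free text;
        // the separator may be '-' or the en dash '–' used in the article).
        private static readonly Regex TodoPattern = new Regex(
            @"^//\s*TODO\s*:\s*\w{2,8}\s*[-–]\s*\d{4}/\d{2}/\d{2}\s*[-–]\s*\S.*$");

        static void Main()
        {
            Console.WriteLine(TodoPattern.IsMatch(
                "// TODO: CM – 2012/01/20 – Move this class so it can be reused.")); // True
            Console.WriteLine(TodoPattern.IsMatch(
                "// TODO fix this later")); // False
        }
    }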

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves: a dependent consumer; a declaration of the component's dependencies, defined as interface contracts; and an injector that creates instances of classes that implement a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the inverse direction, can dependency injection be compared to using some kind of higher-order function?
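
    To make the analogy concrete, here is a hedged C# sketch (the types IPredicate, EvenPredicate, and Lists are hypothetical illustrations) showing the same injection point twice: once as an interface contract in the OO style, once as a function-typed parameter in the FP style.

    using System;
    using System.Collections.Generic;

    // OO style: the dependency is declared as an interface contract.
    public interface IPredicate<T>
    {
        bool Test(T value);
    }

    public class EvenPredicate : IPredicate<int>
    {
        public bool Test(int value) { return value % 2 == 0; }
    }

    public static class Lists
    {
        // The dependent consumer: it receives an implementation from its
        // caller, exactly as an injector would hand one in.
        public static List<T> Filter<T>(IPredicate<T> predicate, IEnumerable<T> items)
        {
            var result = new List<T>();
            foreach (var item in items)
                if (predicate.Test(item))
                    result.Add(item);
            return result;
        }

        // FP style: the 'interface contract' is just the function type (a -> Bool).
        public static List<T> Filter<T>(Func<T, bool> predicate, IEnumerable<T> items)
        {
            var result = new List<T>();
            foreach (var item in items)
                if (predicate(item))
                    result.Add(item);
            return result;
        }
    }

    class Demo
    {
        static void Main()
        {
            var numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
            // Both calls 'inject' the even-number contract; only the shape differs.
            var viaInterface = Lists.Filter(new EvenPredicate(), numbers);
            var viaLambda = Lists.Filter((int x) => x % 2 == 0, numbers);
            Console.WriteLine(string.Join(",", viaInterface)); // 2,4,6,8,10
            Console.WriteLine(string.Join(",", viaLambda));    // 2,4,6,8,10
        }
    }

    Seen this way, the interface is to the OO consumer what the function signature is to filter: both declare the contract, and in both cases the caller supplies the implementation.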

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how to modify it for scale. Specifically, we're considering using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern: using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have at most a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe, from recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:

    - I create an index for each month
    - By default, searches are aliased against the most recent X months
    - If no results are found, we can search against the previous X months
    - As we grow, we can reduce the interval X
    - Each document is routed by the user ID

    So, this should let us do the following: a search by user will search all indices across one shard; a search by time will search ~2 indices (by default) across all shards. Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
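
    As a sanity check on the scheme, here is a small C# sketch that resolves a query into the indices and routing it would hit. The orders-yyyy-MM index names are hypothetical; the comma-separated index list and the routing query parameter are standard ElasticSearch REST conventions.

    using System;
    using System.Collections.Generic;

    class ShardedSearchPlanner
    {
        // Last X monthly indices, newest first: orders-2012-06, orders-2012-05, ...
        static IEnumerable<string> RecentIndices(DateTime now, int months)
        {
            for (int i = 0; i < months; i++)
                yield return "orders-" + now.AddMonths(-i).ToString("yyyy-MM");
        }

        // Builds the REST search path; the routing parameter confines a
        // user-scoped query to that user's shard in each index.
        static string SearchPath(DateTime now, int months, string userId)
        {
            var path = "/" + string.Join(",", RecentIndices(now, months)) + "/_search";
            return userId == null ? path : path + "?routing=" + userId;
        }

        static void Main()
        {
            // All-user search across the two most recent months (all shards):
            Console.WriteLine(SearchPath(new DateTime(2012, 6, 15), 2, null));
            // User-scoped search hits one shard per index:
            Console.WriteLine(SearchPath(new DateTime(2012, 6, 15), 2, "alice"));
        }
    }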

    Read the article

  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving into Domain-Driven Design, and some of the concepts I'm coming across make a lot of sense on the surface, but when I think about them more I have to wonder whether they're really a good idea. The concept of Aggregates, for instance, makes sense: you create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may only list the orders, with links to click on to open an order and see its details. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But if I call GetAll to list all my orders, it would seem that this pattern requires the entire aggregate to be returned (complete orders, order lines, etc.) when I only need a small subset of that information (just the header information). Am I missing something? Or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need them. Certainly, one could create methods on the repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like Repository in the first place. Can anyone clarify this for me?
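
    One common way out, sketched below with hypothetical types (Order, OrderHeader, and the interfaces are illustrative, not from any particular framework), is to keep the aggregate repository for commands and serve list pages from a separate, lightweight read model:

    using System.Collections.Generic;

    // Full aggregate: loaded only when behavior needs it (edit, delete,
    // enforcing invariants across order lines).
    public class Order
    {
        public int Id { get; set; }
        public List<string> OrderLines = new List<string>(); // simplified
    }

    // Lightweight read model for list pages: header fields only.
    public class OrderHeader
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
    }

    // Aggregate repository: command side, whole aggregates in and out.
    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
        void Delete(int id);
    }

    // Query side: screens read cheap projections, no order lines materialized.
    public interface IOrderQueries
    {
        IList<OrderHeader> GetAllHeaders();
    }

    Separating the query side this way (often described under the CQRS label) keeps the repository focused on consistency-enforcing commands without forcing list pages to hydrate entire aggregates.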

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >