Search Results

Search found 14900 results on 596 pages for 'git remote repository'.


  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my Windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a Rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

      # custom options
      set :application, "xyz.com"
      set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
      set :user, "myname"
      set :runner, user
      set :use_sudo, false
      server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

      # deploy to
      set :deploy_to, "/home/myname/public_html/xyz"

      # repository
      set :scm, :git
      set :deploy_via, :copy

      # ssh options
      default_run_options[:pty] = true
      ssh_options[:paranoid] = false
      ssh_options[:port] = yyyy

      # start passenger
      namespace :deploy do
        task :start do ; end
        task :stop do ; end
        task :restart, :roles => :app, :except => { :no_release => true } do
          run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
        end
      end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?

    Read the article

  • After mounting using sshfs I cannot commit my changes using subversion

    - by robUK
    Hello. Local machine: Fedora 13, Subversion 1.6.9. Remote machine: CentOS 5.3, Subversion 1.4.2. I have a project which is on the remote machine: [email protected]:projects/ssd1. I have mounted this on my local machine: sshfs [email protected]:projects/ssd1 /home/jbloggs/projects/mnt/ssd1. Everything mounts OK, so I open my project using GNU Emacs 23.2.1. When I want to commit my changes in Emacs I get the following error: can't move /home/jbloggs/projects/mnt/ssd1/.svn/tmp/entries to /home/jbloggs/mnt/ssd1/.svn/entries: Operation not permitted. Does anyone know of any way I can resolve this issue? Many thanks for any advice.

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

      <Location /svn>
        DAV svn
        SVNParentPath /var/svn/repos
        AuthType Basic
        AuthName "Subversion Repository"
        AuthUserFile /etc/apache2/dev.passwd
        Require valid-user
      </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication... It does the exact same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give permission to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.

    Read the article

  • Java ME: Can we retrieve the Bluetooth address of a connected device from an open slave connection?

    - by Rohit
    Here is a typical sequence of events: the host device opens a service (the host device accepts and opens all incoming connections), and a remote device connects to the host device. Now we have a slave connection open at the host device. At the host device, I want to know the Bluetooth address of the remote device. I can always pass it as data from the remote device to the host device, but can I extract it from the connection object somehow, without any data transfer? Thanks in advance...

    Read the article

  • How do I access Windows Event Viewer log data from Java

    - by MatthieuF
    Is there any way to access the Windows Event Log from a Java class? Has anyone written any APIs for this, and would there be any way to access the data from a remote machine? The scenario is: I run a process on a remote machine from a controlling Java process. This remote process logs stuff to the Event Log, which I want to be able to see in the controlling process. Thanks in advance.

    Read the article

  • How to strengthen MySQL database server security?

    - by i need help
    Suppose we use server1 for all files (a file server) and server2 for the MySQL database (a database server). For websites on server1 to access the database on server2, don't they need to connect to the IP address of the second (MySQL) server? In that case it is a remote MySQL connection. However, I have seen some people comment on the security issue: remote access to MySQL is not very secure. When your remote computer first connects to your MySQL database, the password is encrypted before being transmitted over the Internet. But after that, all data is passed as unencrypted "plain text". If someone were able to view your connection data (such as a "hacker" capturing data from an unencrypted WiFi connection you're using), that person would be able to view part or all of your database. So I am just wondering about ways to secure it: allow remote MySQL access only from server1 by allowing its static IP address; allow remote access from server1 by restricting the allowed connection port to 3306; change 3306 to another port? Any advice?
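
    One common remedy for the "plain text after the handshake" problem is to require TLS on the MySQL connection itself, in addition to firewalling port 3306 so that only server1's IP may connect. Below is a minimal sketch using the mysql-connector-python driver; the host, credentials, and certificate paths are placeholders, and it assumes the MySQL server has already been configured with SSL certificates.

      import mysql.connector

      # Connect to the remote MySQL server (server2) over TLS so that queries and
      # results are encrypted on the wire, not just the initial login handshake.
      conn = mysql.connector.connect(
          host="192.0.2.10",                 # placeholder IP of server2
          port=3306,
          user="webapp",                     # placeholder application account
          password="secret",
          database="appdb",
          ssl_ca="/etc/mysql/certs/ca.pem",  # CA that signed the server certificate
          ssl_verify_cert=True,              # verify the server certificate against that CA
      )

      cur = conn.cursor()
      cur.execute("SELECT VERSION()")
      print(cur.fetchone())
      cur.close()
      conn.close()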

    Read the article

  • SFTP in Python? (platform independent)

    - by Mark Wilbur
    I'm working on a simple tool that transfers files to a hard-coded location with the password also hard-coded. I'm a python novice, but thanks to ftplib, it was easy:

      import ftplib

      info = ('someuser', 'password')    #hard-coded

      def putfile(file, site, dir, user=(), verbose=True):
          """
          upload a file by ftp to a site/directory
          login hard-coded, binary transfer
          """
          if verbose: print 'Uploading', file
          local = open(file, 'rb')
          remote = ftplib.FTP(site)
          remote.login(*user)
          remote.cwd(dir)
          remote.storbinary('STOR ' + file, local, 1024)
          remote.quit()
          local.close()
          if verbose: print 'Upload done.'

      if __name__ == '__main__':
          site = 'somewhere.com'    #hard-coded
          dir = './uploads/'        #hard-coded
          import sys, getpass
          putfile(sys.argv[1], site, dir, user=info)

    The problem is that I can't find any library that supports sFTP. What's the normal way to do something like this securely? Edit: Thanks to the answers here, I've gotten it working with Paramiko and this was the syntax.

      import paramiko

      host = "THEHOST.com"          #hard-coded
      port = 22
      transport = paramiko.Transport((host, port))

      password = "THEPASSWORD"      #hard-coded
      username = "THEUSERNAME"      #hard-coded
      transport.connect(username = username, password = password)

      sftp = paramiko.SFTPClient.from_transport(transport)

      import sys
      path = './THETARGETDIRECTORY/' + sys.argv[1]    #hard-coded
      localpath = sys.argv[1]
      sftp.put(localpath, path)

      sftp.close()
      transport.close()
      print 'Upload done.'

    Thanks again!
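
    For reference, a slightly more compact variant of the same Paramiko upload uses SSHClient, which manages the transport and host keys for you. This is only a sketch with placeholder host and credentials (and it auto-accepts unknown host keys, which you would tighten in real use):

      import paramiko

      client = paramiko.SSHClient()
      # Auto-accept unknown host keys for brevity; pin/verify them in production.
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect("somewhere.com", port=22, username="someuser", password="password")

      sftp = client.open_sftp()                        # SFTP session over the SSH connection
      sftp.put("report.txt", "./uploads/report.txt")   # upload a local file to the remote path
      sftp.close()
      client.close()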

    Read the article

  • A Security (encryption) Dilemma

    - by TravisPUK
    I have an internal WPF client application that accesses a database. The application is a central resource for a Support team and as such includes remote access/login information for clients. At the moment this database is not available via a web interface etc., but one day it is likely to be. The remote access information includes the usernames and passwords for the clients' networks so that our clients' software applications can be remotely supported by us. I need to store the usernames and passwords in the database and provide the support consultants access to them so that they can log in to the client's system and then provide support. Hope this is making sense. So the dilemma is that I don't want to store the usernames and passwords in cleartext in the database, to ensure that if the DB were ever compromised I would not be handing access to our clients' networks to whoever gets the database. I have looked at two-way encryption of the passwords, but as they say, two-way is not much different from cleartext: if you can decrypt it, so can an attacker... eventually. The problem here is that I have set up a method that uses a salt and a passcode stored in the application, and I have also used a salt stored in the db, but all of these have their weaknesses; for example, if the app were reflected (decompiled) it would expose the salts. How can I secure the usernames and passwords in my database, and yet still provide the ability for my support consultants to view the information in the application so they can use it to log in? This is obviously different from storing users' passwords, which can be hashed one way because I don't need to know what they are. But I do need to know what the clients' remote access passwords are, as we need to enter them at the time of remoting to them. Anybody have some theories on what would be the best approach here? Update: the function I am trying to build is for our CRM application that will store the remote access details for the client. The CRM system provides call/issue tracking functionality, and during the course of investigating an issue the support consultant will need to remote in. They will then view the client's remote access details and make the connection.
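
    Although the application in question is WPF/.NET, the pattern usually suggested for this dilemma is the same everywhere: keep the reversible (symmetric) encryption, but move the key out of the database (into a secrets store, DPAPI, an HSM, or similar) so that a stolen database dump alone is useless. Below is a minimal illustration of that pattern, sketched in Python with the cryptography package purely for brevity; the key handling is deliberately simplified and is an assumption, not a recommendation of where to keep it:

      from cryptography.fernet import Fernet

      # In practice the key would come from a secrets store / DPAPI / HSM,
      # never from the same database that holds the ciphertext.
      key = Fernet.generate_key()
      f = Fernet(key)

      # Encrypt the client's remote-access password before writing it to the DB.
      ciphertext = f.encrypt(b"client-network-password")

      # The support tool decrypts it only when a consultant needs to connect.
      plaintext = f.decrypt(ciphertext)
      print(plaintext.decode())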

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by HTTP transport level security policy. The WSDL of the remote web service looks like following where the part marked in red shows the security policy: <?xml version='1.0' encoding='UTF-8'?> <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="https://httpsbasicauth" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="https://httpsbasicauth" name="HttpsBasicAuthService"> <wsp:UsingPolicy wssutil:Required="true"/> <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> <wsp:ExactlyOne> <wsp:All> <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <ns1:TransportToken> <wsp:Policy> <ns1:HttpsToken RequireClientCertificate="false"/> </wsp:Policy> </ns1:TransportToken> <ns1:AlgorithmSuite> <wsp:Policy> <ns1:Basic256/> </wsp:Policy> </ns1:AlgorithmSuite> <ns1:Layout> <wsp:Policy> <ns1:Strict/> </wsp:Policy> </ns1:Layout> </wsp:Policy> </ns1:TransportBinding> <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/> </xsd:schema> <xsd:schema> <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/> </xsd:schema> </types> <message name="echoString"> <part name="parameters" element="tns:echoString"/> </message> <message name="echoStringResponse"> <part name="parameters" element="tns:echoStringResponse"/> </message> <portType name="HttpsBasicAuth"> <operation name="echoString"> <input message="tns:echoString"/> <output message="tns:echoStringResponse"/> </operation> </portType> <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth"> <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/> <operation name="echoString"> <soap:operation soapAction=""/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="HttpsBasicAuthService"> <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding"> <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/> </port> </service> </definitions> The security assertion in the WSDL (marked in red) indicates that this is the HTTP transport level security policy which requires one way SSL with default authentication (aka. basic authenticate with username/password). Normally, there are two ways to handle web service security policy with OSB 11g: Use WebLogic 9.x policy Use OWSM Since OSB doesn’t support WebLogic 9.x WSSP transport level assertion (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy. 
    For the Business Service, you can either add the necessary client policies manually by clicking the Add button, or let Oracle Service Bus automatically pick and add compatible client policies by clicking the Add Compatible button. Unfortunately, when we tried with OWSM, we couldn't find http_token_policy, since OSB PS4 doesn't support the OWSM http_token_policy. It seems that we ran into an unsupported situation where no appropriate policy is available from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the failure of WSDL validation prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions. Solution 1: OSB PS5. The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the two workaround solutions described below. Solution 2: Modifying the WSDL. This solution addresses the OSB console's complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind that the WSDL is modified only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:
      1. Connect to the OSB console.
      2. Import the remote WSDL into OSB.
      3. Remove the security assertion (the part marked in red above) from the imported WSDL.
      4. Create a service account. In our sample, we simply take the user weblogic.
      5. Create the business service, check "Basic" for Authentication, and select the created service account.
      6. Make sure that OSB consumes the web service via https.
    This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import the WSDL. Solution 3: Using the original WSDL. This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn't like a WSDL with an embedded security assertion. Since OWSM doesn't provide a feature to explicitly ignore the embedded policy from a remote WSDL, in this solution we use OWSM in a tricky way to ignore the embedded policy.
    The main steps of this solution:
      1. Connect to the OSB console.
      2. Import the remote WSDL into OSB.
      3. Create a service account.
      4. Create the business service, check "Basic" for Authentication, and select the created service account. As the imported WSDL is intact, the OSB Kernel:398133 error is expected.
      5. Ignore this error message for the moment and navigate to the Policies page of the business service.
      6. Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up.
      7. Here is the tricky part: select an arbitrary policy, and click "Cancel".
      8. Update and save.
    By clicking the "Cancel" button, we didn't add any OWSM policy to the business service, but the embedded policy is ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button which allows you to ignore the embedded policy explicitly. This solution keeps the imported WSDL intact, which is the big advantage over solution 2. It is suitable for an OSB 11g domain (versions prior to PS5) with OWSM configured. This blog addressed the unsupported transport-level web service security policy with OSB PS4. To summarize, if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport-level security policy directly. With releases prior to 11g PS5, you can consider solution 2 or 3, depending on whether OWSM is configured.

    Read the article

  • How to set the service endPoint URI dynamically in SOA Suite 11gR1, by Sylvain Grosjean

    - by JuergenKress
    Use case: this example demonstrates how to get the URI of the backend service from a repository and how to set it dynamically on our partnerLink (dynamicPartnerLink). Implementation steps:
      1. Create a DVM file
      2. Create a BPEL component
      3. Add the endPointURI variable and assign the URI
      4. Set the endpointURI property in the invoke activity
    1. Create a DVM file: in order to define our repository, we are going to use DVM (Data Value Maps). For more explanation regarding DVM, you should read this documentation. 2. Create a BPEL component: first you need to implement the simple BPEL process like this: the AssignPayload is used to set the input variable of our invoke activity; the AssignEndpointURI is used to dynamically set the endPointURI variable from our DVM repository; the invoke activity calls the external service. Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Downgrade from php5 5.3.10 to php5 5.3.2 in Ubuntu 12.04

    - by iori
    I wanted to install php5 5.3.2, so I first deleted all the php5 files:

      sudo apt-get purge php5 php5-cli php5-common php5-mysql

    and also deleted the deb files from /var/cache/apt/archives, so now there is no deb file on the system. Then I added this person's repository, because he added php5.3.2:

      sudo apt-add-repository ppa:sushkov/personal

    and then I updated and upgraded:

      sudo apt-get update && sudo apt-get upgrade

    Then I installed php5:

      sudo apt-get install php5 php5-cli php5-common php5-mysql

    Now when I check the php version it says php5.3.10, and when I run sudo apt-cache show php5 it says:

      Package: php5
      Version: 5.3.15-1~dotdeb.0
      Architecture: all
      Maintainer: Guillaume Plessis <[email protected]>
      Installed-Size: 0
      Depends: libapache2-mod-php5 (>= 5.3.15-1~dotdeb.0) | libapache2-mod-php5filter (>= 5.3.15-1~dotdeb.0) | php5-cgi (>= 5.3.15-1~dotdeb.0) | php5-fpm (>= 5.3.15-1~dotdeb.0), php5-common (>= 5.3.15-1~dotdeb.0)
      Filename: dists/squeeze/php5/binary-i386/php5_5.3.15-1~dotdeb.0_all.deb

    Now I don't know how to downgrade. Is there any way I can change something in the repository so that when I write sudo apt-get install php5 it will install php5.3.2, which I want, instead of php5.3.10? Thanks

    Read the article

  • Announcing the latest update to Oracle VM Server for x86 2.2 Release

    - by Honglin Su
    More and more customers have discovered that Oracle delivers more value with Oracle VM compared with other server virtualization solutions, and we've seen the momentum as customers succeed with Oracle VM and leading partners support it. Recently, Oracle VM Server for x86 with Windows PV Drivers passed the Microsoft SVVP requirements for Windows servers, which gives customers more confidence to deploy Microsoft Windows guest OSes onto Oracle VM Server for x86. Today I'm pleased to announce the Oracle VM Server for x86 2.2.2 release. The new features introduced in the 2.2.2 release:
      - Expanded guest OS support to include Oracle Linux 6.x and Oracle Solaris 11 Express. For a complete list of guest OS support, please refer to the Oracle VM Server for x86 release notes.
      - The VMPInfo system information and cluster troubleshooting utility is provided with this release. Additional information on VMPInfo is also available in the following My Oracle Support Notes: 1263293.1 Post-installation check list for new Oracle VM Server; 1290587.1 Performing Site Reviews and Cluster Troubleshooting with VMPInfo.
      - A new storage repository option to provide NFS mount options when creating a storage repository. For more information on this new parameter, see the Oracle VM Server User Guide, "Adding a Storage Repository".
      - Updated OCFS2 cluster file system, libdhcp, and device drivers. See details and additional enhancements in the Oracle VM Server User Guide: New Features in Release 2.2.2.
    The Oracle VM Server for x86 ISO image is available for download at Oracle's E-Delivery site. If you've subscribed to Oracle's Unbreakable Linux Network (ULN), you can simply run the up2date command to update the server. Please refer to the Oracle VM Upgrade Guide: Upgrading Oracle VM Server. There's no change to Oracle VM Manager, which remains at 2.2.0 with the patch 2.2-16. If you have any questions about Oracle VM Server for x86, you may post them on the OTN discussion forum, or purchase support for Oracle Unbreakable Linux and Oracle VM. For Oracle's x86 systems, Oracle VM support as well as Oracle Linux and Oracle Solaris support are included in Oracle Premier Support for Systems. For more information about Oracle's virtualization, visit oracle.com/virtualization.

    Read the article

  • Keep Oracle VM 3 Up to Date

    - by Honglin Su
    More and more customers turn to Oracle VM 3 to virtualize their enterprise applications. An Oracle VM support subscription is an integral part of their success. Customers enjoy the benefit of industry-leading 24x7 global support for their server virtualization implementation, and receive access to patches, fixes, and updates via the Unbreakable Linux Network (ULN). For customers running Oracle systems, Oracle VM support is included in Oracle Premier Support for Systems at no extra cost, and customers receive comprehensive systems coverage that includes single-point accountability for Oracle server and storage hardware; integrated software (for example, firmware); and operating system software (Oracle Solaris, Oracle Linux, and Oracle VM). To run a successful virtualization infrastructure, it's important to keep your Oracle VM 3 environment up to date by leveraging Oracle VM support resources.
      - Oracle VM Server updates: You can easily upgrade Oracle VM Server using a Yum repository. You can download the latest server patch updates from ULN. To receive notification of software updates delivered to ULN for Oracle VM, you can sign up here. For information on setting up an Oracle VM Server Yum repository and using Oracle VM Manager to perform the upgrade of Oracle VM Servers, see Updating and Upgrading Oracle VM Servers in the Oracle VM User's Guide.
      - Oracle VM Manager updates: Get the download instructions at OTN, and apply the latest Oracle VM Manager patch. Be sure to review the patch README before you apply the patches.
    Support customers have access to extremely valuable knowledge notes from My Oracle Support. They are the first to receive useful tips to help address issues in Oracle VM deployments. For example:
      - Upgrade to Oracle VM 3.1.1 using Yum Repository may cause network configuration scripts to be renamed causing network failure after reboot (Doc ID 1464126.1)
      - Oracle VM server reboots after network becomes unresponsive due to deep C-State power management setting (Doc ID 1440197.1)
    For more information about Oracle's virtualization, visit oracle.com/virtualization.

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but.... Actually, having implemented it many times I belong to the later group. Here is how. Caveat Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g. Assumptions The whole process should be done in one day. I assume that all domains and instances are started during setup, if you need to restart them on demand or purpose, be sure to have proper start/stop scripts, I don't mention them. Preparation It goes without saying, that you always should do a proper backup before you change anything on your production environment. With proper backup, I also mean a tested and verified restore process. If you dared to test it before, do it now. It pays off. Requirements For OAM 11g to work properly you need a LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try, in this case OID 11g is a good choice.During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME. Installation vs Configuration With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and the Configuration of the instances (the runtime). This is really great as you can install all the software and create new instances when needed. In the following we adhere to this scheme and install the software first and then configure the instances later. Installation Steps The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls. Step 1 The Database Start the installation with the database. It is quite obvious but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least a Oracle 10.2.0.4 version. This database can be on a different host. Step 2 The Repository Creation Utility The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema. Step 3 The Foundation With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as its foundation for all OFM 11g installation. We therefore install it first. Depending on your operating system, it might be possible, that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either but if you have both for your platform you are ready to go. Step 3a The JDK To make things interesting, Oracle currently has two JDKs in its portfolio. The Sun JDK and the JRockit JDK. 
Both are available for a number of platforms. If you are lucky and both are available for your platform, install both in a separate directory (and not one of your ORACLE_HOMEs) each, You can use the later as you like. Step 3b Install WLS for OID and OAM With the JDK installed, we start the generic installer with java -jar wls_generic.jar.STOP! Before you do this, check the version first. It should be 1.6.0_18 or later and not the GCC one (Some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME. Step 4 Install OID Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory. Step 5 Install SOA Suite The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later. Step 6 Install OAM Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software, the instance will be created later. Step 7 Backup the Installation At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point. Step 8 Configure OID The software is installed and now we need instances to run it. This process is called configuration. For OID use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter some issues check the Oracle Support site for help. This configuration will also start the OID instance. Step 9 Install the Oracle AS SSO Schema Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go, do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start the procedure. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps and will succeed with a working solution. Step 10 Configure OAM Reached this step? Great. You are ready to create an OAM instance. Use the $IAM_HOME/iam/common/binconfig.sh for this. This will open the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance. Step 11 Install WLS for Forms 11g It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages. Therefore we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply. 
    We call this one FORMS_HOME. Step 12 Install Forms: In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step. Step 13 Configure Forms: To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically. Step 14 Set up your Forms SSO Environment: Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it. Step 15 Migrate the SSO Repository: Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat, or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/. Note: at the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases. Step 16 Change your OHS osso.conf and shut down OC4J_SECURITY: In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Now we change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog to ask for your user credentials. Done. Now your Forms environment is successfully integrated with OAM 11g. Enjoy. What's Next? This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on. References: Nearly everything is documented. Use the documentation!
      - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
      - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14
      - Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
      - Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:
      - The OBIEE repository is an Oracle Database and is set up as a data warehouse
      - Usage tracking is enabled in OBIEE (for information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
      - The OBIEE instance is discovered in EM Cloud Control (for information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
      - The OBIEE repository is discovered in EM Cloud Control (for information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach:
      - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository.
      - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle DB.
      - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)

    Read the article

  • Are there open source alternatives to Bitbucket, Github, Kiln, and similar DVCS browsing and management tools?

    - by Ryan Taylor
    I am aware of several tools/services that provide DVCS browsing and management, such as Bitbucket, Github, Kiln, SCM-Manager and Rhodecode. However, the use case I am considering is one such that:
      - Any source code must reside on an employer's internal servers.
      - The solution must be open source.
      - It should provide a Bitbucket or Github like experience, including a project wiki, repository browsing and management, and social coding aspects such as code review.
      - The solution should have Mercurial support (if not support for other DVCSs).
    Of these, only SCM-Manager and RhodeCode come close, as they can be installed on your own servers and are open source. However, they do not have the Bitbucket or Github experience. There is no issue tracker or wiki, and the UI, while functional, is not up to par with Github or Bitbucket. I can get close with Trac or Redmine with their repository browsers, but unfortunately they do not have any repository management capabilities. Are there other open source tools out there that would provide a similar experience to Bitbucket, Github or Kiln?

    Read the article

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and others have been widely adopted and have become the must-have tool in any software development project. Source artifacts are checked out, modified, checked in, and all the history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Well, we definitely need a way to export/import our artifacts from/to this repository. The Oracle BI Publisher report development approach is based on such a shared repository model (the catalog), and starting from BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be used to export/import the reports from the command line. To start using the BI Publisher Catalog Utility you should:
      1. Go to the file system of the server where the BI Publisher binaries have been installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
      2. Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
      3. If you do not want to pass the BI Publisher server URL, username and password during each invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml
      4. Open the terminal window and go to <BIP_CLIENT_DIR>/bin
      5. Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME
      6. Now it's time to run the utility:
         - if you are on Linux, just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation
         - if you are on MS Windows the bad news is that the command script for MS Windows is missing, and support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I've created this script already; please download it from GitHub.
    Hope you will find this utility useful during your day-by-day BI Publisher development.

    Read the article

  • Remap keyboard Ubuntu 12.04; Asus Q500A

    - by hydroxide
    I have an Asus Q500A with Win8 and Ubuntu 12.04 64-bit; Linux kernel 3.8.0-32-generic. I am using gnome-panel and xserver-xorg-lts-raring. I have been experiencing problems with the keyboard shortcuts since I did a fresh install. fn+f10 is supposed to mute my system, but instead it will repeatedly press d. fn+f11 is volume down, but it presses c. fn+f12 is volume up, and presses b repeatedly. Most of the other on-board shortcuts such as adjusting screen and LED brightness work most of the time, but sometimes press other letters repeatedly. Also, sometimes my Ctrl gets held down for no reason. Everything works fine in Windows. I have tried installing all recommends and sudo dpkg-reconfigure -a to reconfigure all packages, which did not solve my problem. I have tried using KeyTouch editor to edit keymaps, navigating to /usr/shar/x11/xkb/keymap, but when I try opening any of these files it says the file contains no keyboard element. I think if I were just able to remap my keyboard it might solve my issues; otherwise, if anyone knows where I can get Asus drivers for 12.04 please let me know. Apparently I didn't have all repositories enabled. I executed the following commands and am trying the updates they give me (getting Linux kernel 3.8.0-33-generic as well as a bunch of other packages):

      sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
      sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
      sudo add-apt-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to unit-test the repository, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (or if not bad, then not the expected way to get said feedback)
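
    The kind of test being asked for here is cheap to express once the view layer can be rendered in isolation. As a language-neutral illustration (sketched in Python with a jinja2 template rather than Razor, purely so it is self-contained), the idea is to render the view with a known model and assert on the output:

      from jinja2 import Template

      # A stand-in for the Customer view; in ASP.Net MVC this would be a Razor view.
      CUSTOMER_VIEW = Template("<h1>{{ customer.name }}</h1><p>{{ customer.address }}</p>")

      def test_customer_view_shows_name_and_address():
          customer = {"name": "Jane Doe", "address": "1 Main St"}
          html = CUSTOMER_VIEW.render(customer=customer)
          # Fails immediately if someone removes the address from the view.
          assert "Jane Doe" in html
          assert "1 Main St" in html

      test_customer_view_shows_name_and_address()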

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview The purpose of this article is to describe some of the important foundational concepts of ATG.  This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc.  For more information on these topics, please see the online product manuals. Modules The first concept is called the 'ATG Module'.  Simply put, you can think of modules as the building blocks for ATG applications.  The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules).  Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules.  Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration.  Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc.  Modules generally will have dependencies on other modules (the modules beneath it).  For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module. Modules have a ton of value because they provide a way to decouple a customers implementation from the out of the box ATG modules.  This allows for a much easier job when it comes time to upgrade the ATG platform.  Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications. One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start.  One of the first things ATG does is to look through all the modules you specified, and for each one, determine a list of modules that are also required to start (based on each modules dependencies).  Once this final, ordered list is determined, ATG continues to boot up.  One of the outputs from the ordered list of modules is that each module can contain it's own classes and configuration.  During boot, the ordered list of modules drives the unified classpath and configpath.  This is what determines which classes override others, and which configuration overrides other configuration.  Think of it as a layered approach. The structure of a module is well defined.  It simply looks like a folder in a filesystem that has certain other folders and files within it.  Here is a list of items that can appear in a module: MyModule: META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module.  One important property is what other modules this module depends on. config - this is typically present in most modules.  It defines a tree structure (folders containing properties files, XML, etc) that maps to ATG components (these are described below). lib - this contains the classes (typically in jarred format) for any code defined in this module j2ee - this is where any web-apps would be stored. src - in case you want to include the source code for this module, it's standard practice to put it here sql - if your module requires any additions to the database schema, you should place that schema here Here's a screenshots of a module: Modules can also contain sub-modules.  A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule). 
Finally, it is important to completely understand how modules work if you are going to be able to leverage them effectively.  There are many different ways to design modules you want to create, some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications. Components A component in ATG can be thought of as a single item that performs a certain set of related tasks.  An example could be a ProductViews component - used to store information about what products the current customer has viewed.  Components have properties (also called attributes).  The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the ID's of products viewed in order of their being viewed).  The previous examples of component properties would typically also offer get and set methods used to retrieve and store the property values.  Components typically will also offer other types of useful methods aside from get and set.  In the ProductViewed component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not. Components are organized in a tree like hierarchy called 'nucleus'.  Nucleus is used to locate and instantiate ATG Components.  So, when you create a new ATG component, it will be able to be found 'within' nucleus.  Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work.  It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere. Here is a screenshot of a component in nucleus:  Components can be extremely simple (i.e. a single property with a get method), or can be rather complex offering many properties and methods.  To be an ATG component, a few things are required: a class - you can reference an existing out of the box class or you could write your own a properties file - this is used to define your component the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder Within the properties file, you will need to point to the class you want to use: $class=com.mycompany.myclass You may also want to define the scope of the class (request, session, or global): $scope=session In summary, ATG Components live in nucleus, generally have links to other components, and provide some meaningful type of work.  You can configure components as well as extend their functionality by writing code. Repositories Repositories (a.k.a. Data Anywhere Architecture) is the mechanism that ATG uses to access data primarily stored in relational databases, but also LDAP or other backend systems.  ATG applications are required to be very high performance, and data access is critical in that if not handled properly, it could create a bottleneck.  ATG's repository functionality has been around for a long time - it's proven to be extremely scalable.  Developers new to ATG need to understand how repositories work as this is a critical aspect of the ATG architecture.   Repositories essentially map relational tables to objects in ATG, as well as handle caching.  ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc), and this is comprised of both the underlying database schema along with the associated repository definition files (XML).  
It is fully expected that implementations will extend / change the out of the box repository definitions, so there is a prescribed approach to doing this.  The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above).  The other important best practice is to never modify the out of the box schema - in other words, don't add columns to existing ATG tables, just create your own new tables.  These will help ensure you can easily upgrade your application at a later date. xml-combination As mentioned earlier, when you start ATG, the order of the modules will determine the final configpath.  Files within this configpath are 'layered' such that modules on top can override configuration of modules below it.  This is the same concept for repository definition files.  If you want to add a few properties to the out of the box user profile, you simply need to create an XML file containing only your additions, and place it in the correct location in your module.  At boot time, your definition will be combined (hence the term xml-combination) with the lower, out of the box modules, with the result being a user profile that contains everything (out of the box, plus your additions).  Aside from just adding properties, there are also ways to remove and change properties. types of properties Aside from the normal 'database backed' properties, there are a few other interesting types: transient properties - these are properties that are in memory, but not backed by any database column.  These are useful for temporary storage. java-backed properties - by nature, these are transient, but in addition, when you access this property (by called the get method) instead of looking up a piece of data, it performs some logic and returns the results.  'Age' is a good example - if you're storing a birth date on the profile, but your business rules are defined in terms of someones age, you could create a simple java-backed property to look at the birth date and compare it to the current date, and return the persons age. derived properties - this is what allows for inheritance within the repository structure.  You could define a property at the category level, and have the product inherit it's value as well as override it.  This is useful for setting defaults, with the ability to override. caching There are a number of different caching modes which are useful at different times depending on the nature of the data being cached.  For example, the simple cache mode is useful for things like user profiles.  This is because the user profile will typically only be used on a single instance of ATG at one time.  Simple cache mode is also useful for read-only types of data such as the product catalog.  Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customers order.  There are many options in terms of configuring caching which are outside the scope of this article - please refer to the product manuals for more details. Other important concepts - out of scope for this article There are a whole host of concepts that are very important pieces to the ATG platform, but are out of scope for this article.  Here's a brief description of some of them: formhandlers - these are ATG components that handle form submissions by users. pipelines - these are configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order. 
special kinds of repositories (versioned, files, secure, ...) - there are a couple different types of repositories that are used in various situations.  See the manuals for more information. web development - JSP/ DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library.  This library is used throughout your JSP pages to interact with all the ATG components. messaging - a message sub-system used as another way for components to interact. personalization - ability for business users to define a personalized user experience for customers.  See the other blog posts related to personalization.

    Read the article

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all. I just did a fresh installation of Ubuntu 10.04 LTS server without any issues. However, apt-get update or aptitude update is giving the following errors: "bzip2: (stdin) is not a bzip2 file. ign" for all lines, plus the following:

      Fetched 3,582B in 0s (74.1kB/s)
      Reading package lists...
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
      W: Failed to fetch security.ubuntu.com/ubuntu/dists/lucid-security/Release
      W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid/Release
      W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
      W: Some index files failed to download, they have been ignored, or old ones used instead.

    Please guide me in resolving this error. TIA. Regards, Venu

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating a multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I am not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components: DataTier (contains EF POCO objects), DomainModel (contains domain-related objects), Global (among other common things it contains Repository objects for CRUD to the DB), Business Layer (business logic and interaction between data and client, and CRUD using the repository), Web (Client) (which talks to the DomainModel and Business layers but also has its own ViewModels for Create and Edit views, for example). Note: I am using ValueInjector for converting one type of entity to another (which is proving an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it (1) creates a ViewModel, (2) converts the ViewModel to a DomainModel for Business to understand, (3) Business converts it to a DataModel for the Repository, and then data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as one does not exist. I am looking for something that is scalable. It should be reusable (for example, using design patterns, interfaces, inheritance etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,

    Read the article

  • Ubuntu 12.04 stopped recognizing my BenQ monitor and reduced resolution to 1024x768

    - by Omri
    A few days ago I installed Ubuntu 12.04 32-bit Desktop. It recognized my hardware without a problem (at least as far as I know) and all worked fine. I left my system running through the night (it is at work) because it is also working as a database server, and when I came to work today the resolution was 1024x768 (the monitor recommends 1920x1080), even though in the Display section of the System Settings it was recognized as BenQ, and no higher resolution was offered. After a restart, the monitor name changed from BenQ to Unknown. This is a desktop computer. I also installed gtk-redshift and f.lux. I checked Additional Drivers to see if there is something I can install, but it didn't find anything. I tried to Google it but I didn't find anything about a monitor that stops being recognized after it was already working. I did enable some PPAs yesterday, namely webupd8, mozillateam/thunderbird-stable and some others, and I also followed the instructions to patch NotifyOSD to be more friendly:

      sudo add-apt-repository ppa:caffeine-developers/ppa
      sudo add-apt-repository ppa:leolik/leolik
      sudo apt-get update
      sudo apt-get upgrade
      sudo apt-get install libnotify-bin
      pkill notify-osd
      sudo add-apt-repository ppa:nilarimogard/webupd8
      sudo apt-get update
      sudo apt-get install notifyosdconfig

    I have now purged both the caffeine-developers and leolik PPAs in the hope it would help, but no change. Has there been a change in the packages that could introduce this problem? Any help will be very appreciated :-) Omri

    Read the article

  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving in to Domain Driven Design and some of the concepts I'm coming across make a lot of sense on the surface, but when I think about them more I have to wonder if that's really a good idea. The concept of Aggregates, for instance, makes sense. You create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may only list the number of orders, with links to click on to open the order and see its order ids. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But... if I call GetAll to list all my orders, it would seem to me that this pattern would require the entire list of aggregate information to be returned - complete orders, order lines, etc. - when I only need a small subset of that information (just header information). Am I missing something? Or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need it. Certainly, one could create methods on your repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like repository in the first place. Can anyone clarify this for me?
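
    For what it's worth, a common way out of this is to keep the aggregate repository for commands and add a separate, lightweight read model (or a dedicated header query) for listing screens. Below is a rough sketch of that split in Python; the class and method names are illustrative rather than taken from any particular framework, and the db object is assumed to expose a simple query() method:

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class OrderHeader:
          # Read model: just enough fields for the listing page.
          order_id: int
          customer_name: str
          line_count: int

      class OrderRepository:
          def __init__(self, db):
              self.db = db  # assumed to expose query(sql) -> list of row tuples

          def get_by_id(self, order_id: int):
              # Loads the full aggregate (order plus order lines) for edits and business rules.
              raise NotImplementedError

          def get_headers(self) -> List[OrderHeader]:
              # Listing screens bypass the aggregate and fetch only header columns.
              rows = self.db.query(
                  "SELECT o.id, o.customer_name, COUNT(l.id) "
                  "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                  "GROUP BY o.id, o.customer_name"
              )
              return [OrderHeader(*row) for row in rows]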

    Read the article
