Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.


  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation will now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login.

    The Beginning
    Before we start, let's reconsider the requirements needed to achieve the ultimate goal:
    - Access to the Forms 11g application must be authenticated by OAM (protected).
    - Access to the ADF Faces 11g application must be authenticated by OAM (protected).
    - Switching from one application to the other should not result in a re-authentication (aka single sign-on).
    - The user identity should be available to the application without any extra work in the application code.
    All of these are common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO, or "the old SSO") while ADF Faces is quite open and can be protected by either Oracle AS SSO or Oracle Access Manager SSO (OAM SSO, or "the modern SSO"). Both application types can use their own login mechanism.

    The Forms 11g Application
    To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected.

    The ADF Faces 11g Application
    With ADF 11g you can develop quite a number of useful Faces-based applications. Among many features, it comes with the ADF Security feature, which lets you protect your pages, regions, and even task flows from unauthenticated usage in a declarative way. To demonstrate that functionality, a sample application with different access levels plus a login dialog is used. This application comes with a public page that has protected content (a button). Once you are authenticated, the protected content and some personalisation (the user's name) are shown.

    Protecting Forms 11g
    As already explained in OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as an OSSO partner application, set up mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done. Sort of. By default, OAM is configured to run in co-exist mode. This means that a user who has already logged into an OAM SSO application has to re-authenticate to the Forms application. To avoid this, you must disable the co-exist mode, for example by using WLST and issuing disableCoexistMode on the OAM server.

    Protecting ADF Faces 11g
    To protect an ADF Faces 11g application we have to consider two scenarios:
    - Use an HTTPD server in front of WLS
    - Use WLS without an HTTPD server
    Both scenarios have their pros and cons; we won't go into details here and will just describe how to configure each.

    Scenario 1: HTTPD Server with WLS
    In this scenario we have to set up the environment in several steps:
    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Install the OAM WebGate into an HTTPD server. The type of WebGate you need to install depends on your HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. An OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM.
      An OAM 10g WebGate asks for the specific configuration and verifies it during installation.
    - Configure the WLS plugin to forward the requests to WLS. Again, depending on your HTTPD server you have different plugins for forwarding requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward.
    - Configure an OAM SSPI Provider as an IdentityAsserter in WLS to retrieve the user identifier. This configuration is quite important, as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to set up your security realm within WLS. You can do this by pointing your browser to the WLS console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. We add the OAMIdentityAsserter as the first SSPI provider. It is important that the Control Flag is set to SUFFICIENT. Everything else can be left as is; no changes are necessary here.
    - Configure an OID Identity Provider to get the real user identity. In OFM 11g: Implementing OAM SSO with Forms we configured an OID as the identity store. To get the user identity we need to configure the same OID as an SSPI provider for WLS. This will retrieve the real user information from OID and create the JAAS Subject and Principals to be used by any application within WLS. Again, from the WLS console, select the Security Realm (usually myrealm) and select Providers. Now add the OIDAuthenticator as the second SSPI provider. It is important that the Control Flag is set to OPTIONAL. After saving this setup, we need to configure the provider by setting the Provider Specific details to access OID.

    Scenario 2: WLS only
    This scenario is a bit easier but requires more work in the WLS setup:
    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Configure the OAM SSPI Provider as an IdentityAuthenticator to authenticate and set the user identifier. When using the OAM SSPI Provider as OAMAuthenticator, we create it with the Control Flag set to SUFFICIENT. After saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM server.
    - Configure an OID Identity Provider to get the real user identity. Again, from the WLS console, select the Security Realm (usually myrealm) and select Providers. Add the OIDAuthenticator as the second SSPI provider with the Control Flag set to OPTIONAL. After saving this setup, configure the provider by setting the Provider Specific details to access OID.

    Configure the ADF 11g Application for OAM
    Actually, there are no changes to be made within the ADF application. We only need to set the value CLIENT_CERT in the <auth-method> tag within the <login-config> tag of the web.xml file.

    Testing
    To test the configuration, simply point your browser to either of the two application URLs. OAM should kick in and redirect you to the OAM login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!
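    For reference, the web.xml change described above is just this fragment (standard Java EE servlet descriptor syntax; the surrounding elements are omitted):

      <login-config>
        <auth-method>CLIENT_CERT</auth-method>
      </login-config>

    And, assuming the OAM-specific WLST environment is set up on the OAM server, disabling the co-exist mode mentioned earlier is essentially a one-liner after connecting to the admin server (host, port, and credentials below are placeholders):

      connect('weblogic', '<password>', 't3://oam-admin-host:7001')
      disableCoexistMode()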

    Read the article

  • DHCPv6: Provide IPv6 information in your local network

    Even though IPv6 might not be that important within your local network, it might be good to get yourself into shape and be able to provide some details of your infrastructure automatically to your network clients. This is the second article in a series on IPv6 configuration:
    - Configure IPv6 on your Linux system
    - DHCPv6: Provide IPv6 information in your local network
    - Enabling DNS for IPv6 infrastructure
    - Accessing your web server via IPv6
    Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    IPv6 addresses for everyone (in your network)
    Okay, after setting up the configuration of your local system, it might be interesting to enable all the machines in your network to use IPv6. There are two options to meet this kind of requirement... Either you're busy like a bee and you go around configuring each and every system manually, or you're more the lazy and effective type of network administrator and you prefer to work with the Dynamic Host Configuration Protocol (DHCP). Obviously, I'm of the second type.
    Enabling dynamic IPv6 address assignment can be done with a new or an existing instance of a DHCP daemon. In the case of an Ubuntu-based installation this might be isc-dhcp-server. The isc-dhcp-server package provides address pooling for IPv4 and IPv6 within the same package; you just have to run two independent daemons, one for each protocol version.
    First, check whether isc-dhcp-server is already installed and maybe running on your machine, like so:
      $ service isc-dhcp-server6 status
    In case the service is unknown, you have to install it like so:
      $ sudo apt-get install isc-dhcp-server
    Please bear in mind that there is no designated installation package for IPv6.
    Okay, next you have to create a separate configuration file for IPv6 address pooling and network parameters, called /etc/dhcp/dhcpd6.conf. Unlike its IPv4 counterpart, this file is not automatically provided by the package. Again, use your favourite editor and put in the following lines:
      $ sudo nano /etc/dhcp/dhcpd6.conf
      authoritative;
      default-lease-time 14400;
      max-lease-time 86400;
      log-facility local7;
      subnet6 2001:db8:bad:a55::/64 {
          option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;
          option dhcp6.domain-search "ios.mu";
          range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;
          range6 2001:db8:bad:a55::/64 temporary;
      }
    Next, save the file and start the daemon as a foreground process to see whether it is going to listen to requests or not, like so:
      $ sudo /usr/sbin/dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf eth0
    The parameters, explained quickly: with -6 we run as a DHCPv6 server, with -d we send log messages to the standard error descriptor (so you should monitor your /var/log/syslog file, too), and with -cf we explicitly specify our newly created configuration file. You might also use the command switch -t to test the configuration file prior to running the server.
    In my case, I ended up with a couple of complaints from the server, especially that the necessary lease file didn't exist. So, ensure that the lease file for your IPv6 address assignments is present:
      $ sudo touch /var/lib/dhcp/dhcpd6.leases
      $ sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases
    Now you should be good to go. Stop your foreground process and try to run the DHCPv6 server as a service on your system:
      $ sudo service isc-dhcp-server6 start
      isc-dhcp-server6 start/running, process 15883
    Check your log file /var/log/syslog for any kind of problems.
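    To illustrate the -t switch mentioned above, a configuration dry run (parsing the file without handing out any leases) would look something like this:

      $ sudo /usr/sbin/dhcpd -6 -t -cf /etc/dhcp/dhcpd6.conf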
    Refer to the man pages of isc-dhcp-server, and you might also check out Chapter 22.6 of Peter Bieringer's IPv6 Howto. The instructions regarding DHCPv6 on the Ubuntu Wiki are not as complete as expected and might not be as helpful as this article or Peter's HOWTO. But see for yourself.

    Does the client get an IPv6 address?
    Running a DHCPv6 server on your local network surely comes in handy, but it has to work properly. The following paragraphs briefly describe how to check the IPv6 configuration of your clients.

    Linux - ifconfig or ip command
    First, you have to enable IPv6 on your Linux system by specifying the necessary directives in the /etc/network/interfaces file, like so:
      $ sudo nano /etc/network/interfaces
      iface eth1 inet6 dhcp
    Note: your network device might be eth0 - please don't just copy my configuration lines. Then, either restart your network subsystem, or enable the device manually using the dhclient command with the IPv6 switch, like so:
      $ sudo dhclient -6
    You would use either the ifconfig or (if installed) the ip command to check the configuration of your network device, like so:
      $ sudo ifconfig eth1
      eth1      Link encap:Ethernet  HWaddr 00:1d:09:5d:8d:98
                inet addr:192.168.160.147  Bcast:192.168.160.255  Mask:255.255.255.0
                inet6 addr: 2001:db8:bad:a55::193/64 Scope:Global
                inet6 addr: fe80::21d:9ff:fe5d:8d98/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    Looks good: the client has an IPv6 assignment. Now, let's see whether DNS information has been provided, too.
      $ less /etc/resolv.conf
      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 2001:4860:4860::8888
      nameserver 2001:4860:4860::8844
      nameserver 192.168.1.2
      nameserver 127.0.1.1
      search ios.mu
    Nicely done.

    Windows - netsh
    Per the description on TechNet, netsh is defined as follows: "Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. Netsh can also save a configuration script in a text file for archival purposes or to help you configure other servers."
    And even though TechNet states that it applies to Windows Server (only), it is also available on Windows client operating systems, like Vista, Windows 7 and Windows 8. In order to get or even set information related to the IPv6 protocol, we have to switch the netsh interface context prior to our queries. Open a command prompt in Windows and run the following statements:
      C:\Users\joki> netsh
      netsh> interface ipv6
      netsh interface ipv6> show interfaces
    Select the device index from the Idx column to get more details about the IPv6 address and DNS server information (here I'm going to use my WiFi device with device index 11), like so:
      netsh interface ipv6> show address 11
    Okay, address information has been provided. Now, let's check the details about DNS and resolving host names:
      netsh interface ipv6> show dnsservers 11
    Okay, that looks good already. Our Windows client has a valid IPv6 address lease with lifetime information and details about the configured DNS servers.
    Talking about DNS servers... Your clients should be able to connect to your network servers via IPv6 using hostnames instead of IPv6 addresses. Please read on about how to enable a local named with IPv6.
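    As an addendum to the Linux client checks above: on systems where ifconfig is deprecated, the equivalent query with the ip command would look something like this (device name as in the example above):

      $ ip -6 addr show dev eth1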

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure "Web Roles" I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a certain "known good" source control commit checkpoint. By configuring our JetBrains TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.

    Step 0: Configuring your Azure Project in Visual Studio
    Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:
    1. Install the prerequisite Windows Azure SDK
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type)
    3. Configure your Role and Service Configuration/Definition as desired
    4. Right-click on your Azure project and choose "Publish," create a publish profile, and push to your web role
    You don't actually have to do step #4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

    Understanding the Azure Publish Process
    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:
    - ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other setting information.
    - ProjectName.Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.
    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target:
      msbuild.exe /target:Publish

    Windows Azure PowerShell Cmdlets
    The last pieces of the puzzle that make CI automation possible are the Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what let us create deployments without Visual Studio or other user intervention.

    Preparing TeamCity for Azure Deployments
    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install". Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file (screenshot omitted). Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory.
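    As an aside, if you want to reproduce that build step from a command line, a more explicit MSBuild invocation might look like the following sketch (the solution file name and build configuration are placeholders):

      msbuild.exe MySolution.sln /target:Publish /property:Configuration=Release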
    If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed.
    With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:
    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell
    2. After installing the cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script

    Scripting the CI Deploy Process
    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:
    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/)
    2. Import the Windows Azure PowerShell cmdlets
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file)
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment
    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

      $subscription = "[Your Subscription Name]"
      $service = "[Your Azure Service Name]"
      $slot = "staging" #staging or production
      $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
      $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
      $timeStampFormat = "g"
      $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

      Write-Output "Running Azure Imports"
      Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
      Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
      Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

      function Publish() {
          $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

          if ($a[0] -ne $null) {
              Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. "
          }
          if ($deployment.Name -ne $null) {
              #Update deployment in place (usually faster, cheaper, won't destroy VIP)
              Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
              UpgradeDeployment
          } else {
              CreateNewDeployment
          }
      }

      function CreateNewDeployment() {
          write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
          Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

          $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

          $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
          $completeDeploymentID = $completeDeployment.deploymentid

          write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
          Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
      }

      function UpgradeDeployment() {
          write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
          Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

          # perform Update-Deployment
          $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

          $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
          $completeDeploymentID = $completeDeployment.deploymentid

          write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
          Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
      }

      Write-Output "Create Azure Deployment"
      Publish

    Creating the TeamCity Build Step
    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Then set your script to "Source Code," set the script execution mode to "Put script into PowerShell stdin with '-Command' arguments," and copy/paste in the above script, replacing the placeholder sections with your values (screenshot omitted).

    Wrap Up
    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers – for example, you could even deploy whenever a new master branch commit comes in and passes all required tests.
    In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired.
    After your TeamCity build configuration is complete, whenever you click the "Run" button, all of your code will be compiled, published, and deployed to Windows Azure!
    One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run." This brings up a dialog where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature. Enjoy!

    Read the article

  • Instructions on how to configure a WebLogic Cluster and use it with Oracle Http Server

    - by Laurent Goldsztejn
    On October 17th I delivered a webcast on WebLogic clustering that included a demo with Apache as the proxy server. I realized that many steps are needed to set up the configuration I used during the demo. The purpose of this article is to go through these steps to show how quickly and easily one can define a new cluster and then proxy requests via an Oracle HTTP Server (OHS).
    The domain configuration wizard offers the option to create a cluster. The administration console or WLST, the WebLogic scripting tool, can also be used to define a new cluster (a short WLST sketch appears at the end of this article). A cluster can be created at any time, but the servers that will participate in it cannot be in a running state.

    Cluster creation using the configuration wizard
    Network and architecture requirements need to be considered when choosing between unicast and multicast. The document Multicast Vs. Unicast with WebLogic Clustering is of great help in making the best decision between the two messaging modes. In addition, Configure Cluster offers details on each of the fields involved.
    After this initial configuration page, individual servers can be assigned to the newly created cluster, although servers can also be added later. What is not recommended is for the Admin server to participate in a cluster, as the main purpose of the Admin server is to perform the bulk of the administrative processing for the domain. Servers need to be stopped before being assigned to a cluster. There is also no minimum number of servers that have to participate in the cluster.
    At this point the configuration should be done and the cluster created successfully. This can easily be verified from the console. Each clustered managed server can be launched to join the cluster. At startup, the following messages should be logged for each clustered managed server:
      <Notice> <WeblogicServer> <BEA-000365> <Server state changed to STARTING>
      <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using messaging_mode cluster messaging>
      <Notice> <Cluster> <BEA-000133> <Waiting to synchronize with other running members of cluster_name>
    It's time to try sending requests to the cluster, and we will do this with the help of Oracle HTTP Server playing the role of a proxy server to demonstrate load balancing.

    Proxy server configuration
    The first step is to download the WebLogic Server Web Server Plugin, which enhances the web server by handling requests aimed at the WebLogic cluster. For our test, Oracle HTTP Server (OHS) will be used. However, plug-ins are also available for Apache HTTP Server, Microsoft Internet Information Server (IIS), Oracle iPlanet Web Server, or even WebLogic Server itself with the HttpClusterServlet.
    Once OHS is installed on the system, the configuration file, mod_wl_ohs.conf, needs to be altered to include the WebLogic proxy specifics. First of all, add the following directive to instruct Apache to load the WebLogic shared object module extracted from the plugins file just downloaded:
      LoadModule weblogic_module modules/mod_wl_ohs.so
    Then create an IfModule directive to encapsulate the following Location block, so that proxying is enabled by path (each request including /wls will be directed to the WebLogic cluster). You could also proxy requests by MIME type using MatchExpression in the Location block.
      <IfModule weblogic_module>
        <Location /wls>
          SetHandler weblogic-handler
          PathTrim /wls
          WebLogicCluster MS1_URL:port,MS2_URL:port
          Debug ON
          WLLogFile c:/tmp/global_proxy.log
          WLTempDir "c:/myTemp"
          DebugConfigInfo On
        </Location>
      </IfModule>
    SetHandler specifies the handler for the plug-in module. PathTrim instructs the plug-in to trim /wls from the URL before forwarding the request to the cluster. The list of WebLogic servers defined in WebLogicCluster can contain a mixed set of clustered and single servers. However, the dynamic list returned for this parameter will only contain valid clustered servers, and may contain more servers if not all clustered servers are listed in WebLogicCluster.

    Testing proxy and load balancing
    It's time to start the OHS web server, which should at this point be configured correctly to proxy requests to the clustered servers. By default, round-robin is the load balancing strategy set by WebLogic. Testing the load balancing can easily be done by disabling cookies in your browser, given that a request containing a cookie attempts to connect to the primary server; if that attempt fails, the plug-in attempts to make a connection to the next available server in the list in a round-robin fashion. With cookies enabled, you could use two different browsers to test the load balancing with a JSP page that contains the following:
      <%@ page contentType="text/html; charset=iso-8859-1" language="java" %>
      <%
        String path = request.getContextPath();
        String getProtocol = request.getScheme();
        String getDomain = request.getServerName();
        String getPort = Integer.toString(request.getLocalPort());
        String getPath = getProtocol + "://" + getDomain + ":" + getPort + path + "/";
      %>
      <html>
      <body>
      Receiving Server <%=getPath%>
      </body>
      </html>
    Assuming that you name the JSP page Test.jsp and the webapp that contains it TestApp, your browsers should open the following URL:
      http://localhost/wls/TestApp/Test.jsp
    Each browser should connect to a different clustered server, and this simple JSP should confirm that. Note that the webapp containing the JSP needs to be deployed to the cluster.
    You can also verify that the load is correctly balanced by looking at the proxy log file. Each request generates a set of log entries that starts with:
      timestamp ================New Request:
    Each request is associated with a primary server and, if one is available, a secondary server. For our test request, the following entries should appear in the log as well:
      Using Uri /wls/TestApp/Test.jsp
      After trimming path: '/TestApp/Test.jsp'
      The final request string is '/TestApp/Test.jsp'
    If an exception occurs, it is also logged in the proxy log file with the prefix:
      timestamp *******Exception type

    WeblogicBridgeConfig
    DebugConfigInfo enables runtime statistics and the production of configuration information. For security purposes, this parameter should be turned off in production. The URL http://webserver_host:port/path/xyz.jsp?__WebLogicBridgeConfig will display a proxy bridge page detailing the plugin configuration, followed by runtime statistics, which can help in diagnosing issues along with analysis of the proxy log file. In our example the URL would be:
      http://localhost/wls/TestApp/Test.jsp?__WebLogicBridgeConfig
    The top section of that page shows the plugin configuration; the bottom part contains runtime statistics (screenshots omitted, unrelated to the previous JSP example).
    This entire plugin configuration should be very similar for other web servers; what varies is the name of the proxy server configuration file. So, as you can see, it only takes a few minutes to configure a WebLogic cluster and get servers to join it.
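    As promised above, here is a minimal WLST sketch for creating a cluster from the command line. Treat it as a hedged example: host, credentials, and the cluster name are placeholders, and the managed servers would still need to be assigned to the cluster afterwards.

      connect('weblogic', '<password>', 't3://adminhost:7001')
      edit()
      startEdit()
      # create the cluster MBean and pick a messaging mode
      cmo.createCluster('myCluster')
      cd('/Clusters/myCluster')
      cmo.setClusterMessagingMode('unicast')
      save()
      activate()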

    Read the article

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational
    These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes:
      100 - Continue.
      101 - Switching protocols.

    2xx - Success
    These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes:
      200 - OK. The client request has succeeded.
      201 - Created.
      202 - Accepted.
      203 - Nonauthoritative information.
      204 - No content.
      205 - Reset content.
      206 - Partial content.

    3xx - Redirection
    These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes:
      301 - Moved permanently.
      302 - Object moved.
      304 - Not modified.
      307 - Temporary redirect.

    4xx - Client error
    These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes:
      400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error:
        400.1 - Invalid Destination Header.
        400.2 - Invalid Depth Header.
        400.3 - Invalid If Header.
        400.4 - Invalid Overwrite Header.
        400.5 - Invalid Translate Header.
        400.6 - Invalid Request Body.
        400.7 - Invalid Content Length.
        400.8 - Invalid Timeout.
        400.9 - Invalid Lock Token.
      401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log:
        401.1 - Logon failed.
        401.2 - Logon failed due to server configuration.
        401.3 - Unauthorized due to ACL on resource.
        401.4 - Authorization failed by filter.
        401.5 - Authorization failed by ISAPI/CGI application.
      403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error:
        403.1 - Execute access forbidden.
        403.2 - Read access forbidden.
        403.3 - Write access forbidden.
        403.4 - SSL required.
        403.5 - SSL 128 required.
        403.6 - IP address rejected.
        403.7 - Client certificate required.
        403.8 - Site access denied.
        403.9 - Forbidden: Too many clients are trying to connect to the Web server.
        403.10 - Forbidden: Web server is configured to deny Execute access.
        403.11 - Forbidden: Password has been changed.
        403.12 - Mapper denied access.
        403.13 - Client certificate revoked.
        403.14 - Directory listing denied.
        403.15 - Forbidden: Client access licenses have exceeded limits on the Web server.
        403.16 - Client certificate is untrusted or invalid.
        403.17 - Client certificate has expired or is not yet valid.
        403.18 - Cannot execute requested URL in the current application pool.
        403.19 - Cannot execute CGI applications for the client in this application pool.
        403.20 - Forbidden: Passport logon failed.
        403.21 - Forbidden: Source access denied.
        403.22 - Forbidden: Infinite depth is denied.
      404 - Not found.
      IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error:
        404.0 - Not found.
        404.1 - Site Not Found.
        404.2 - ISAPI or CGI restriction.
        404.3 - MIME type restriction.
        404.4 - No handler configured.
        404.5 - Denied by request filtering configuration.
        404.6 - Verb denied.
        404.7 - File extension denied.
        404.8 - Hidden namespace.
        404.9 - File attribute hidden.
        404.10 - Request header too long.
        404.11 - Request contains double escape sequence.
        404.12 - Request contains high-bit characters.
        404.13 - Content length too large.
        404.14 - Request URL too long.
        404.15 - Query string too long.
        404.16 - DAV request sent to the static file handler.
        404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping.
        404.18 - Querystring sequence denied.
        404.19 - Denied by filtering rule.
      405 - Method Not Allowed.
      406 - Client browser does not accept the MIME type of the requested page.
      408 - Request timed out.
      412 - Precondition failed.

    5xx - Server error
    These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes:
      500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error:
        500.0 - Module or ISAPI error occurred.
        500.11 - Application is shutting down on the Web server.
        500.12 - Application is busy restarting on the Web server.
        500.13 - Web server is too busy.
        500.15 - Direct requests for Global.asax are not allowed.
        500.19 - Configuration data is invalid.
        500.21 - Module not recognized.
        500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode.
        500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode.
        500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode.
        500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. (Note: this is where the distributed rules configuration is read for both inbound and outbound rules.)
        500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. (Note: this is where the global rules configuration is read.)
        500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred.
        500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated.
        500.100 - Internal ASP error.
      501 - Header values specify a configuration that is not implemented.
      502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error:
        502.1 - CGI application timeout.
        502.2 - Bad gateway.
      503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error:
        503.0 - Application pool unavailable.
        503.2 - Concurrent request limit exceeded.

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of an automated "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for OS installation purposes, with a RAID1 relation between them. This setup has been done through the BIOS. The kickstart configuration file looks like this:
      #Generated by Kickstart Configurator
      #platform=AMD64 or Intel EM64T
      #System language
      lang en_US
      #Language modules to install
      langsupport en_US
      #System keyboard
      keyboard us
      #System mouse
      mouse
      #System timezone
      timezone Asia/Dili
      #Root password
      rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
      #Initial user
      user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
      #Reboot after installation
      reboot
      #Use text mode install
      text
      #Install OS instead of upgrade
      install
      #Use Web installation
      url --url my_repo_location
      #System bootloader configuration
      bootloader --location=mbr
      #Clear the Master Boot Record
      zerombr yes
      #Partition clearing information
      clearpart --all --initlabel
      #Disk partitioning information
      part /boot --fstype ext4 --size 100 --ondisk sda
      part / --fstype ext4 --size 10000 --ondisk sda
      part /var --fstype ext4 --size 10000 --ondisk sda
      part swap --size 1024 --ondisk sdb
      #System authorization information
      auth --useshadow --enablemd5
      #Network information
      network --bootproto=dhcp --device=eth0
      #Firewall configuration
      firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
      #X Window System configuration information
      xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME
    But I am getting the error below:
      No root file system is defined
    Please advise. Do we need to make any modification in the kickstart configuration file? Any help in this regard will be very helpful for us. The automated Ubuntu OS installation is successful in a virtual machine (VM) with the above ks.cfg (kickstart configuration file) but fails on the physical machine. If possible, please provide a corrected ks.cfg to resolve the above problem.
    Thanks & Regards,
    Rajesh Prasad

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat
    Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions
    The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts. I don't mention them.

    Preparation
    It goes without saying that you should always do a proper backup before you change anything on your production environment. By proper backup, I also mean a tested and verified restore process. If you never dared to test it before, do it now. It pays off.

    Requirements
    For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the installation and integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration
    With OFM 11g Oracle introduced a clear separation between installation of the binaries (the software) and configuration of the instances (the runtime). This is really great, as you can install all the software and then create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances.

    Installation Steps
    The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls.

    Step 1: The Database
    Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2: The Repository Creation Utility
    The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3: The Foundation
    With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it is possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a: The JDK
    To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK.
    Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own separate directory (and not in one of your ORACLE_HOMEs). You can use either later as you like.

    Step 3b: Install WLS for OID and OAM
    With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later, and not the GCC one (some Linux distros have that installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and that the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4: Install OID
    Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and select the installation-only step. This installs the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5: Install SOA Suite
    The IAM 11g suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

    Step 6: Install OAM
    Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7: Backup the Installation
    At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good for when you need to go back to this point.

    Step 8: Configure OID
    The software is installed, and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter issues, check the Oracle Support site for help. This configuration will also start the OID instance.

    Step 9: Install the Oracle AS SSO Schema
    Before we install the Forms software, we need to install the Oracle AS SSO schema into the database and OID. This is a rather dangerous procedure, but it is fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10: Configure OAM
    Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. This will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

    Step 11: Install WLS for Forms 11g
    It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages. Therefore we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply.
    We call this one FORMS_HOME.

    Step 12: Install Forms
    In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step.

    Step 13: Configure Forms
    To configure Forms 11g, we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically.

    Step 14: Set Up Your Forms SSO Environment
    Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

    Step 15: Migrate the SSO Repository
    Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua, ua.bat, or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard will have created new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.
    Note: at the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

    Step 16: Change Your OHS sso.conf and Shut Down OC4J_SECURITY
    In step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First, go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Then change moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials.
    Done. Your Forms environment is now successfully integrated with OAM 11g. Enjoy.

    What's Next?
    This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

    References
    Nearly everything is documented. Use the documentation!
    - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
    - Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapters 11-14
    - Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
    - Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10
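    Addendum to step 16: for orientation, a typical mod_osso.conf fragment after the change might look something like the sketch below. The directive names follow the standard mod_osso module; the paths are placeholders for your environment.

      LoadModule osso_module libexec/mod_osso.so
      <IfModule mod_osso.c>
        OssoConfigFile /path/to/ohs/config/osso.conf
        OssoIpCheck off
        OssoIdleTimeout off
      </IfModule>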

    Read the article

  • Which tools should be used for data migration between environments?

    - by Paula Speranza-Hadley
    With the Oracle Utilities Application Framework based products, there are a number of tools provided that can be used to transfer data from one environment to another. There are three main tools that implementations use:
    - ConfigLab - A configurable copy facility that is metadata aware and therefore understands the relationships between objects; by invoking the relevant maintenance objects, it validates the data copied. This utility uses the object validation to help ensure data integrity. Basically it is a set of configuration tables and a set of batch jobs to perform the migration of data.
    - Bundling - A configurable release management tool that allows exporting of Advanced Configuration Environment based objects (business services, business objects, UI maps, etc.) from one environment to another.
    - Blueprint - An Oracle Utilities Software Development Kit (SDK) based tool to import metadata from the development environment to your initial testing environment. The utility is command-line based and uses a text-based configuration file to drive the utility on the source and target sides.
    Each tool has a role in an implementation, but you must be careful to use the right tool for the right job. The suggestions are as follows:
    - Only use the Blueprint tool for migrating data from your development platform to your initial test environment. The Blueprint tool is not designed to move large amounts of data; used incorrectly, it is risky and can potentially break the integrity of your data.
    - The SDK provides the configuration data that it is used for (mainly metadata). This should not be extended: while it can perform data migration on any data, it is not efficient, and it is risky for certain types of configuration data.
    Additional information can be found in the following whitepaper: Oracle Utilities Application Framework - Release Management - Software Configuration Management on MyOracle.com

    Read the article

  • Monitoring settings in a configsection of your app.config for changes

    - by dotjosh
    The usage:

      public static void Main()
      {
          using (var configSectionAdapter = new ConfigurationSectionAdapter<ACISSInstanceConfigSection>("MyConfigSectionName"))
          {
              configSectionAdapter.ConfigSectionChanged += () =>
              {
                  Console.WriteLine("File has changed! New setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
              };
              Console.WriteLine("The initial setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
              Console.ReadLine();
          }
      }

    The meat:

      using System;
      using System.Configuration;
      using System.IO;

      public class ConfigurationSectionAdapter<T> : IDisposable where T : ConfigurationSection
      {
          private readonly string _configSectionName;
          private FileSystemWatcher _fileWatcher;

          public ConfigurationSectionAdapter(string configSectionName)
          {
              _configSectionName = configSectionName;
              StartFileWatcher();
          }

          private void StartFileWatcher()
          {
              // Watch the directory that contains the application's .config file.
              var configurationFileDirectory = new FileInfo(Configuration.FilePath).Directory;
              _fileWatcher = new FileSystemWatcher(configurationFileDirectory.FullName);
              _fileWatcher.Changed += FileWatcherOnChanged;
              _fileWatcher.EnableRaisingEvents = true;
          }

          private void FileWatcherOnChanged(object sender, FileSystemEventArgs args)
          {
              // Ignore changes to other files in the same directory.
              var changedFileIsConfigurationFile = string.Equals(args.FullPath, Configuration.FilePath, StringComparison.OrdinalIgnoreCase);
              if (!changedFileIsConfigurationFile)
                  return;
              ClearCache();
              OnConfigSectionChanged();
          }

          private void ClearCache()
          {
              // Force ConfigurationManager to re-read the section from disk on next access.
              ConfigurationManager.RefreshSection(_configSectionName);
          }

          public T ConfigSection
          {
              get { return (T)Configuration.GetSection(_configSectionName); }
          }

          private System.Configuration.Configuration Configuration
          {
              get { return ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); }
          }

          public delegate void ConfigChangedHandler();
          public event ConfigChangedHandler ConfigSectionChanged;

          protected void OnConfigSectionChanged()
          {
              if (ConfigSectionChanged != null)
                  ConfigSectionChanged();
          }

          public void Dispose()
          {
              _fileWatcher.Changed -= FileWatcherOnChanged;
              _fileWatcher.EnableRaisingEvents = false;
              _fileWatcher.Dispose();
          }
      }
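    For completeness, a minimal app.config that such an adapter could watch might look like the sketch below. The section name matches the usage example above; the section type's namespace and the assembly name are assumptions for illustration only.

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <!-- namespace and assembly are hypothetical placeholders -->
          <section name="MyConfigSectionName"
                   type="MyApp.Configuration.ACISSInstanceConfigSection, MyApp" />
        </configSections>
        <MyConfigSectionName MyConfigSetting="initial value" />
      </configuration>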

    Read the article

  • foreign-architecture

    - by speedy-MACHO
    Whenever I install something, I get the following error multiple times:
      Unknown configuration key 'foreign-architecture' found in your 'dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with 'dpkg --add-architecture' invocations at the command line.
    When I try dpkg --add-architecture I get:
      Unknown configuration key `foreign-architecture' found in your `dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with `dpkg --add-architecture' invocations at the command line.
      dpkg: error: --add-architecture takes one argument
      Type dpkg --help for help about installing and deinstalling packages [*];
      Use `dselect' or `aptitude' for user-friendly package management;
      Type dpkg -Dhelp for a list of dpkg debug flag values;
      Type dpkg --force-help for a list of forcing options;
      Type dpkg-deb --help for help about manipulating *.deb files;
      Options marked [*] produce a lot of output - pipe it through `less' or `more' !
    I have no problems yet, but since it says this warning will become a hard error at a later date, I'd better do something about it. When I search for 'foreign-architecture', I find an empty file, containing not a single byte. I somehow can't delete that file. Please help, it's kind of creepy...
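    (For reference, the one-argument form the error message refers to looks like this; the architecture name below is just an example:)

      $ sudo dpkg --add-architecture i386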

    Read the article

  • svcbundle for easier SMF manifest creation

    - by Glynn Foster
    One of the new features we've introduced in Oracle Solaris 11.1 is a new utility called svcbundle. This utility allows for the easy creation of Service Management Facility (SMF) manifests and profiles, letting you take advantage of the benefits of automatic application restart without requiring full knowledge of the XML file format that is used when integrating with SMF. Integrating into SMF is one of the easiest and most obvious ways to take advantage of some of the mission-critical aspects of the operating system, but many customers found the initial learning curve of creating an XML manifest too steep.
    With the release of Oracle Solaris 11 11/11, SMF took on a more integral role as more and more system configuration started to use the SMF configuration repository for backend storage. This provides a number of benefits, including the ability to carefully manage customized administrative configuration, site-specific configuration, and vendor-provided configuration at different layers, helping to preserve them during system update.
    I've written a new article, Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11, to give you a feel for the help we can provide in converting your applications over to SMF, or doing some site-wide configuration using profiles. This is the first pass at creating such a tool, so we'd love to hear feedback on your experiences using it.
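    As a taste of the utility, generating a simple manifest can be a one-liner. The sketch below is an assumption-laden example: the service name, start command, and output path are placeholders, and the exact option set is described in the article and the svcbundle man page.

      $ svcbundle -o site-myapp.xml -s service-name=site/myapp -s start-method="/opt/myapp/bin/start"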


  • How do you completely reset monitor configuration (e.g. what used to be dpkg-reconfigure xserver-xorg)?

    - by John
    Using Unity on 11.04. I have a secondary monitor (actually two: one at home, one at work). The twin-monitor setup at home works perfectly (ViewSonic monitor). At work, I get very buggy behaviour, such as full-screen applications 'ghosting' when maximized, and other strange effects. I would like to try to completely reset the monitor configuration (not Unity or Compiz) before doing anything else. In 10.10 this would be accomplished with: dpkg-reconfigure xserver-xorg
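    One hedged suggestion: on 11.04, per-user monitor layouts live in monitors.xml rather than xorg.conf, so resetting usually amounts to removing that file (back it up first; the path assumed below is the default):

        # Move the cached monitor layout aside and let the session re-detect displays:
        mv ~/.config/monitors.xml ~/.config/monitors.xml.bak
        xrandr --auto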


  • VMWare Lab Manager: What's the best way to build Library Configurations?

    - by mcohen75
    We're using Lab Manager within our QA group. We use it to quickly deliver environments we need for testing. We have 25 Templates, 14 Library Configurations and counting. To build up our templates we:

    1. Create a base template that is a bare-bones version of Server 2008 + basic configuration (Windows Update, firewall exceptions)
    2. Create a linked clone for each server template we need (SQL Server 08, 05, etc.)
    3. Repeat for other OSs, like Windows 7 and Windows XP

    Then we create configurations:

    1. Create a workspace configuration with multiple images in it (say Server 08 w/SQL Server and Windows 7)
    2. Deploy the configuration and make some minor configuration changes
    3. Undeploy and capture to the library

    How do we keep this manageable? When I need to update a configuration, should I:

    - Rebuild it from templates
    - Clone it to a workspace, make changes, recapture it to the library
    - Keep the configuration in my workspace (don't delete it after capturing it to the library), deploy it to make changes and then re-capture it to the library


  • I get this error after upgrade. Please help

    - by user203404
    dpkg: dependency problems prevent configuration of initramfs-tools:
     initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.2.1~); however:
      Version of initramfs-tools-bin on system is 0.103ubuntu0.2.
     klibc-utils (2.0.1-1ubuntu2) breaks initramfs-tools (<< 0.103) and is installed.
     Version of initramfs-tools to be configured is 0.99ubuntu13.2.
    dpkg: error processing initramfs-tools (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of plymouth:
     plymouth depends on initramfs-tools; however:
      Package initramfs-tools is not configured yet.
    dpkg: error processing plymouth (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of mountall:
     mountall depends on plymouth; however:
      Package plymouth is not configured yet.
    dpkg: error processing mountall (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    dpkg: dependency problems prevent configuration of initscripts:
     initscripts depends on mountall (>= 2.28); however:
      Package mountall is not configured yet.
    dpkg: error processing initscripts (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of upstart:
     upstart depends on initscripts; however:
      Package initscripts is not configured yet.
     upstart depends on mountall; however:
      Package mountall is not configured yet.
    dpkg: error processing upstart (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of passwd:
     passwd depends on upstart-job; however:
      Package upstart-job is not installed.
      Package upstart which provides upstart-job is not configured yet.
    dpkg: error processing passwd (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     initramfs-tools
     plymouth
     mountall
     initscripts
     upstart
     passwd
    E: Sub-process /usr/bin/dpkg returned an error code (1)
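    The usual recovery sequence for a chain of half-configured packages like this one (assuming the package cache is intact) is:

        # Try to finish configuring everything that was left unconfigured:
        sudo dpkg --configure -a
        # Then let apt repair the remaining broken dependencies:
        sudo apt-get install -f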


  • Java Hibernate/C3P0 error: "Could not obtain connection metadata. An attempt by a client to checkout a Connection has timed out"

    - by Allan
    I'm trying to get some code I was passed up and running. It appears to use the Hibernate framework. I've gotten past most of the errors by tweaking the configuration, but this one has me dead stumped. It's trying to connect to two databases: gameapp and gamelog. Both exist. It seems to have issues connecting to gamelog, but none connecting to gameapp (later in the init, it connects to and loads other DBs just fine). Below, I've pasted the error and exception stack dump. I imagine there's something else in the configs, so I've also included the configuration file for that db. I know this is very vague, but I'm hoping some pro can see the stupid mistake I'm missing.

        <?xml version="1.0" encoding="GBK"?>
        <!DOCTYPE hibernate-configuration PUBLIC
            "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
        <hibernate-configuration>
          <session-factory>
            <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
            <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="connection.url">jdbc:mysql://127.0.0.1:3306/gamelog</property>
            <property name="connection.username">root</property>
            <property name="connection.password"></property>
            <property name="connection.useUnicode">true</property>
            <property name="connection.characterEncoding">UTF-8</property>
            <property name="hibernate.jdbc.batch_size">100</property>
            <property name="jdbc.fetch_size">1</property>
            <property name="hbm2ddl.auto">none</property><!-- update -->
            <property name="connection.useUnicode">true</property>
            <property name="show_sql">true</property>
            <!-- c3p0 configuration -->
            <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
            <property name="hibernate.c3p0.min_size">5</property>
            <property name="hibernate.c3p0.max_size">10</property>
            <property name="hibernate.c3p0.timeout">30</property>
            <property name="hibernate.c3p0.idle_test_period">30</property>
            <property name="hibernate.c3p0.max_statements">0</property>
            <property name="hibernate.c3p0.acquire_increment">5</property>
          </session-factory>
        </hibernate-configuration>

    Exception and stack trace:

        2010-04-30 17:50:00,411 WARN [org.hibernate.cfg.SettingsFactory] - Could not obtain connection metadata
        java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
        at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
        at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:527)
        at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
        at org.hibernate.connection.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:35)
        at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:76)
        at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:1933)
        at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1216)
        at com.database.hibernate.util.HibernateFactory.<init>(Unknown Source)
        at com.database.hibernate.util.HibernateUtil.<clinit>(Unknown Source)
        at com.server.databaseop.goodOp.GoodOpImpl.initBreedGoods(Unknown Source)
        at com.server.databaseop.goodOp.GoodOpImpl.access$000(Unknown Source)
        at com.server.databaseop.goodOp.GoodOpImpl$1.run(Unknown Source)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
        Caused by: com.mchange.v2.resourcepool.TimeoutException: A client timed out while waiting to acquire a resource from com.mchange.v2.resourcepool.BasicResourcePool@ca470 -- timeout at awaitAvailable()
        at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1317)
        at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
        at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
        ... 18 more
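    Before digging into the c3p0 settings, it may be worth ruling out the database itself: the pool times out in exactly this way when it cannot establish its initial connections. A minimal check from a shell, using the URL and credentials from the configuration above (assumes the mysql command-line client is installed):

        # Should connect and list tables if jdbc:mysql://127.0.0.1:3306/gamelog is reachable as root:
        mysql -h 127.0.0.1 -P 3306 -u root gamelog -e "SHOW TABLES;"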


  • SecurityNegotiationException in WCF Service Hosted on IIS

    - by Ram
    Hi, I have hosted a WCF service on IIS. The configuration file is as follows <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings/> <connectionStrings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <system.web.extensions> <scripting> <webServices> <!-- Uncomment this section to enable the authentication service. Include requireSSL="true" if appropriate. <authenticationService enabled="true" requireSSL = "true|false"/> --> <!-- Uncomment these lines to enable the profile service, and to choose the profile properties that can be retrieved and modified in ASP.NET AJAX applications. <profileService enabled="true" readAccessProperties="propertyname1,propertyname2" writeAccessProperties="propertyname1,propertyname2" /> --> <!-- Uncomment this section to enable the role service. <roleService enabled="true"/> --> </webServices> <!-- <scriptResourceHandler enableCompression="true" enableCaching="true" /> --> </scripting> </system.web.extensions> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> </system.webServer> <system.serviceModel> <services> <service name="IISTest2.Service1" behaviorConfiguration="IISTest2.Service1Behavior"> <!-- Service Endpoints --> <endpoint address="" binding="wsHttpBinding" contract="IISTest2.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="IISTest2.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> The client configuration file is as follows <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://yyy.zzz.xxx.net/IISTest2/Service1.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IService1" contract="ServTest.IService1" name="WSHttpBinding_IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel> </configuration> When I tried to access the service from client application, I got SecurityNegotiationException and details are Secure channel cannot be opened because security negotiation with the remote endpoint has failed. This may be due to absent or incorrectly specified EndpointIdentity in the EndpointAddress used to create the channel. Please verify the EndpointIdentity specified or implied by the EndpointAddress correctly identifies the remote endpoint. If I host the service on ASP .NET Dev server, it work well but if I host on IIS above mentioned error occurs. Thanks, Ram


  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I have not been happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German), but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act.

    Brief introduction to AVM Fritz!Box devices

    The Fritz!Box series from AVM has been around for more than a decade, and those little 'red boxes' offer a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want peace of mind in your small network, then a Fritz!Box is the easiest way to achieve it. I'm using my box primarily as a WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system.

    Limitations in the administrative Web UI

    Unfortunately, there is no way to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe and the implicit time handling is correct for your location.

    Adjusting the timezone

    I got my device through an order from Amazon Germany some time ago, and honestly I wasn't bothered too much by the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So I finally sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device.

    You've been warned - No support

    Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So, be warned, and continue only if you're sure about what you are going to do. The following solutions are provided 'as-is'; they worked flawlessly for my box but may cause issues in your case. Don't blame me...

    Solution 1 - Backup, modify and restore

    That's the way described in the c't article and a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration to a local file on your machine. Despite some online postings, it is not necessary to specify a password in order to secure or encrypt your backup; IMHO, this only adds another unnecessary layer of complexity to the process. Next, you should create another copy of your settings and keep it unmodified. That's our safety net for restoring the current settings in case we have to issue a factory reset on the box.
    Now, open the configuration file with an advanced text editor that is capable of dealing with Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ will. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file - one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device's internal checksum validation. Without this line, your settings won't be accepted. Caution: The directives are case-sensitive, and your outcome should read something like this:

        **** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
        Password=$$$$<ignore>
        FirmwareVersion=84.05.52
        CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
        OEM=avm
        Country=049
        Language=de
        NoChecks=yes
        **** CFGFILE:ar7.cfg
        /*
         * /var/flash/ar7.cfg
         * Mon Jul 29 10:49:18 2013
         */
        ar7cfg {...

    Then search for the expression 'timezone' and you should find a section like this one (~ line 1113):

        timezone_manual {
                enabled = no;
                offset = 0;
                dst_enabled = no;
                TZ_string = "";
                name = "";
        }

    We would like to handle the timezone setting manually on our device, and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:

        timezone_manual {
                enabled = yes;
                offset = 0;
                dst_enabled = no;
                TZ_string = "MUT-4";
                name = "";
        }

    We specify the designation and the offset in hours of the timezone we would like to have. Caution: The offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck!

    Solution 2 - Telnet

    A more elegant - read: technically more interesting - way to adjust configuration settings on your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel, and you have to enable it with a connected telephone. In order to activate the telnet service, dial the following combination:

        #96*7*
        #96*8* (to disable telnet again after the work has been completed)

    If you're using an AVM handset like the Fritz!Fon, you will receive a confirmation message on the display like so: telnetd ein. Next, depending on your favourite operating system, you either launch a Command Prompt on Windows or a terminal on Linux, get your admin password ready, and connect to your box like so:

        $ telnet fritz.box
        Trying 192.168.1.1...
        Connected to fritz.box.
        Escape character is '^]'.
        password:

        BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        ermittle die aktuelle TTY
        tty is "/dev/pts/0"
        Console Ausgaben auf dieses Terminal umgelenkt
        #

    That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting, we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi - nvi.
    Let's open a second browser window with the fine manual page of nvi and start editing our configuration file:

        # nvi /var/flash/ar7.cfg

    In our configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing the following:

        /timezone    (press Enter/Return)

    Now we should see the exact lines of code as in the backed-up version:

        timezone_manual {
                enabled = no;
                offset = 0;
                dst_enabled = no;
                TZ_string = "";
                name = "";
        }

    And of course, we apply the same changes as described in the previous section:

        timezone_manual {
                enabled = yes;
                offset = 0;
                dst_enabled = no;
                TZ_string = "MUT-4";
                name = "";
        }

    Finally, we have to write our changes back to the file and apply the new settings:

        :wq    (press Enter/Return)
        # ar7cfgchanged

    That's it! Close the telnet session by pressing Ctrl+] and entering 'quit'.

    Additional ideas...

    There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users, it is essential to be able to connect to any device remotely in a safe and secure way, and the installation of an SSH server on the box would be a first step towards improving this situation - and towards avoiding telnet after all. Sometimes there might be problems with your VoIP connections; feel free to adjust the settings for codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.


  • [JDBCExceptionReporter] SQL Warning in Hibernate

    - by adisembiring
    Hi all, I'm using Hibernate 3.2.1 and database SQLServer2000 while I'm try to insert some data using my dao, some warning occurred like this: java.sql.SQLWarning: [Microsoft][SQLServer 2000 Driver for JDBC]Database changed to BTN_SPP_DB at com.microsoft.jdbc.base.BaseWarnings.createSQLWarning(Unknown Source) at com.microsoft.jdbc.base.BaseWarnings.get(Unknown Source) at com.microsoft.jdbc.base.BaseConnection.getWarnings(Unknown Source) at org.hibernate.util.JDBCExceptionReporter.logAndClearWarnings(JDBCExceptionReporter.java:22) at org.hibernate.jdbc.ConnectionManager.closeConnection(ConnectionManager.java:443) at org.hibernate.jdbc.ConnectionManager.aggressiveRelease(ConnectionManager.java:400) at org.hibernate.jdbc.ConnectionManager.afterTransaction(ConnectionManager.java:287) at org.hibernate.jdbc.JDBCContext.afterTransactionCompletion(JDBCContext.java:221) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:119) at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:43) at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:1) at co.id.hanoman.btnmw.spp.dao.test.TagihanDaoTest.testSave(TagihanDaoTest.java:81) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99) at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81) at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34) at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75) at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45) at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66) at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35) at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42) at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34) at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) my hibernate initialization is: 2010-04-26 22:54:05,203 INFO [Version] Hibernate Annotations 3.3.0.GA 2010-04-26 22:54:05,234 INFO [Environment] Hibernate 3.2.1 2010-04-26 22:54:05,234 INFO [Environment] hibernate.properties not found 2010-04-26 22:54:05,234 INFO [Environment] Bytecode provider name : cglib 2010-04-26 22:54:05,234 INFO [Environment] using JDK 1.4 java.sql.Timestamp handling 2010-04-26 22:54:05,343 INFO [Configuration] configuring from resource: /hibernate.cfg.xml 2010-04-26 22:54:05,343 INFO [Configuration] Configuration resource: /hibernate.cfg.xml 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] trying to resolve system-id 
[http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd] 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] recognized hibernate namespace; attempting to resolve on classpath under org/hibernate/ 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] located [http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd] in classpath 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.dialect=org.hibernate.dialect.SQLServerDialect 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.driver_class=com.microsoft.jdbc.sqlserver.SQLServerDriver 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.url=jdbc:microsoft:sqlserver://12.56.11.65:1433;databaseName=BTN_SPP_DB 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.username=spp 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.password=spp



  • MySQL trigger to block a row delete

    - by rtacconi
    I would like to define a trigger to block the deletion of the row with ID 2 in the configuration table - you might guess why. I am trying something like this:

        CREATE TRIGGER do_not_delete_configuration_1
        BEFORE DELETE ON configuration
        FOR EACH ROW
        BEGIN
          IF (OLD.configurationid != 1) THEN
            DELETE FROM configuration WHERE configuration.configuration = OLD.configurationid;
          END IF;
        END;
        |

    without a positive result.


  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product at Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts, like configuration, tests or monitoring data, in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here, and I am very proud to announce the release today under the name of "TeleSharp". In a nutshell, TeleSharp provides the following features:

    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can say that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them with the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types remotely on your application servers in a web browser, using an interface similar to any of the existing .NET reflection tools. This way you can easily determine whether the server is running the correct version of your applications.
    4. Centralize logging and exception management in the repository. You get different reports and a PivotViewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, log4net, EntLib or even Windows ETW.

    The central repository itself is implemented as a set of OData services that any application can easily consume using regular HTTP. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.


  • php-common causing conflict

    - by Cornwell
    I'm trying to install php-pdo but it always fails because of php-common bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Installing php-common I get: bash-3.2# yum install php-common Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do Searched here and google but couldn't find anything that worked EDIT: Added new repositories: bash-3.2# yum install php-common Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: mirror.5ninesolutions.com * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirror.team-cymru.org addons | 1.9 kB 00:00 base | 1.1 kB 00:00 centosplus | 1.9 kB 00:00 centosplus/primary_db | 53 kB 00:01 contrib | 1.9 kB 00:00 contrib/primary_db | 1.1 kB 00:00 epel | 3.6 kB 00:00 extras | 2.1 kB 00:00 nginx | 2.5 kB 00:00 update | 1.9 kB 00:00 update/primary_db | 84 kB 00:04 updates | 1.9 kB 00:00 Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do And then... 
bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: centos.mirror.lstn.net * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirrors.loosefoot.com Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package.
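    Given the "listed more than once" warnings, one plausible next step is to track down the duplicate repository definitions and stale metadata before retrying (paths are the CentOS defaults):

        # Find which repo files define the same section more than once:
        grep -l "^\[base\]" /etc/yum.repos.d/*.repo
        # Clear cached metadata and rebuild it:
        yum clean all
        yum makecache
        # The diagnostics yum itself suggested (from the yum-utils package):
        package-cleanup --problems
        package-cleanup --dupes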


  • Hibernate unknown entity (not missing @Entity or import javax.persistence.Entity)

    - by david99world
    I've got a really simple class... import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; @Entity @Table(name = "users") public class User { @Column(name = "firstName") private String firstName; @Column(name = "lastName") private String lastName; @Column(name = "email") private String email; @Id @GeneratedValue(strategy=GenerationType.AUTO) @Column(name = "id") private long id; public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public long getId() { return id; } public void setId(long id) { this.id = id; } } I call it using... public class Main { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub HibernateUtil.buildSessionFactory(); Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); User u = new User(); u.setEmail("[email protected]"); u.setFirstName("David"); u.setLastName("Gray"); session.save(u); session.getTransaction().commit(); System.out.println("Record committed"); session.close(); } } I keep getting... Exception in thread "main" org.hibernate.MappingException: Unknown entity: org.assessme.com.entity.User at org.hibernate.internal.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:1172) at org.hibernate.internal.SessionImpl.getEntityPersister(SessionImpl.java:1316) at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:117) at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:204) at org.hibernate.event.internal.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:55) at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:189) at org.hibernate.event.internal.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:49) at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:90) at org.hibernate.internal.SessionImpl.fireSave(SessionImpl.java:670) at org.hibernate.internal.SessionImpl.save(SessionImpl.java:662) at org.hibernate.internal.SessionImpl.save(SessionImpl.java:658) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352) at $Proxy4.save(Unknown Source) at Main.main(Main.java:20) hibernateUtil is... 
import org.hibernate.SessionFactory; import org.hibernate.cfg.Configuration; import org.hibernate.service.ServiceRegistry; import org.hibernate.service.ServiceRegistryBuilder; public class HibernateUtil { private static SessionFactory sessionFactory; private static ServiceRegistry serviceRegistry; public static SessionFactory buildSessionFactory() { try { // Create the SessionFactory from hibernate.cfg.xml Configuration configuration = new Configuration(); configuration.configure(); serviceRegistry = new ServiceRegistryBuilder().applySettings(configuration.getProperties()).buildServiceRegistry(); return new Configuration().configure().buildSessionFactory(serviceRegistry); } catch (Throwable ex) { // Make sure you log the exception, as it might be swallowed System.err.println("Initial SessionFactory creation failed." + ex); throw new ExceptionInInitializerError(ex); } } public static SessionFactory getSessionFactory() { sessionFactory = new Configuration().configure().buildSessionFactory(serviceRegistry); return sessionFactory; } } does anyone have any ideas as I've looked at so many duplicates but the resolutions don't appear to work for me. hibernate.cfg.xml shown below... <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <!-- Database connection settings --> <property name="connection.driver_class">com.mysql.jdbc.Driver</property> <property name="connection.url">jdbc:mysql://localhost/ssme</property> <property name="connection.username">root</property> <property name="connection.password">mypassword</property> <!-- JDBC connection pool (use the built-in) --> <property name="connection.pool_size">1</property> <!-- SQL dialect --> <property name="dialect">org.hibernate.dialect.MySQLDialect</property> <!-- Enable Hibernate's automatic session context management --> <property name="current_session_context_class">thread</property> <!-- Disable the second-level cache --> <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property> <!-- Echo all executed SQL to stdout --> <property name="show_sql">true</property> <!-- Drop and re-create the database schema on startup --> <property name="hbm2ddl.auto">update</property> </session-factory> </hibernate-configuration>


  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

    Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well. Perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly - any one of a number of things can go wrong. In that circumstance you might see something like the following in your log output instead:

    Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here, though, is that your package will ignore the fact that it can't find the configuration and execute anyway. This is really, really bad, because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? It sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above.

    Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required'. Any parameter with Required=True must have a value passed to it when the package executes; any attempt to execute the package without one will result in an error, both when executing through the SSMS UI and when executing using T-SQL. The error is:

    Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
    In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Specifying a parameter as Required means that any package in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True - certainly any that are used to define external locations should. @Jamiet


  • BizTalk Server 2013 beta on Windows 8 (with Visual Studio 2012, SQL Server 2012 &amp; ESB Toolkit 2.2)

    - by Vishal
    Hello BizTalkers, Finally, Microsoft released the beta version of BizTalk Server 2010 R2, now called BizTalk Server 2013. I had tried the BTS 2010 R2 CTP version on a Windows Azure VM, and I was particularly excited about the RESTful services support and ESB being fully integrated into BizTalk. I didn't get a chance to test it much, though, given the running costs associated with the Azure VM. Anyway, I was waiting for this announcement and was very glad that Microsoft finally released the on-premise version. Check out what's new in BizTalk Server 2013. Officially, Microsoft says that the BizTalk Server 2013 beta is not supported on Windows 8, but I was curious to try it out. Below is my installation and configuration experience.

    Virtual machine configuration:
    VMware Workstation 9.0.
    Windows 8 Enterprise x64.
    SQL Server 2012.
    Visual Studio 2012 Ultimate.
    BizTalk Server 2013 beta.
    Windows 8 machine name: WIN8
    Local administrator account name: Admin

    First I installed Windows 8 Enterprise on VMware Workstation 9.0 and updated the OS. Since Windows 8 is a new release, there luckily weren't many updates to perform. Next I installed Visual Studio 2012 Ultimate, which was a straightforward installation. Next I installed SQL Server 2012: select New SQL Server stand-alone installation and follow the steps as shown in the screenshots below. Once the installation is finished, fire up SQL Server Management Studio and try connecting. When Management Studio opened up, I briefly wondered why Visual Studio 2010 had opened instead - they made the interfaces look alike. Cool, I like it.

    Next is the real deal: download BizTalk Server 2013 and unzip it to a folder. Double-click Setup.exe and follow the steps in the screenshots to install Microsoft BizTalk Server 2013 beta. I selected all the normal artifacts and also all the artifacts under Additional Software. So far so good. Next, launch BizTalk Server Configuration; I used the Basic configuration as shown in the screenshot below. I didn't expect it, but voila - successful on the first shot. Still unsure whether something had gone wrong, I fired up the BizTalk Server Administration Console, and that too came up just fine. Still not quite believing it, I created a simple messaging application - message in -> message out - and that also worked just fine. Finally I was convinced that BizTalk Server 2013 did work on Windows 8.

    The next step was to install the ESB Toolkit 2.2, which is now integrated with BizTalk Server and does not come as a separate standalone installation file. Again run Setup.exe from the unzipped folder and install Microsoft ESB Toolkit. Unlike before, the ESB Configuration would not open up by itself, so go to the "Windows 8 so-called Start" (I could not resist writing this) and open the ESB Toolkit Configuration wizard. The screenshots below show the configuration I used; you can also find it on MSDN here. Finally, after the ESB configuration, I opened the Admin Console and checked the 2 ESB applications deployed. Cool.

    This concludes my experience installing and configuring BizTalk Server 2013 beta and ESB Toolkit 2.2 on Windows 8. I will try to keep writing about BizTalk Server 2013 and its use with RESTful services, etc. Thanks, Vishal Mody

