Search Results

Search found 4849 results on 194 pages for 'schema migration'.


  • PHP SOAP error: Method element needs to belong to the namespace

    - by kdm
    I'm unable to retrieve data from an XML document; any help is greatly appreciated. I'm using PHP 5.2.10, and the WSDL URL is an internal link within my company. The following code produces an error:

        $url = "http://dta-info/IVR/IVRINFO?WSDL";
        $params = array("zANI" => "12345");
        try {
            $client = new SoapClient($url, array(
                'trace' => 1,
                'connection_timeout' => 2,
                'location' => $url
            ));
        } catch (SoapFault $fault) {
            echo "faultstring: {$fault->faultstring})\n";
        }
        try {
            $result = $client->GetIVRinfo($params);
        } catch (SoapFault $fault) {
            echo "(faultcode: {$fault->faultcode}, faultstring: {$fault->faultstring} )\n";
        }

        (faultcode: SOAP-ENV:Client, faultstring: There should be no path or parameters after a SOAP vname. )

    So I tried non-WSDL mode, but I receive a different error no matter how I try to format the params:

        $url = "http://dta-info/IVR/IVRINFO";
        $params = array("zANI" => "12345");
        try {
            $client = new SoapClient(null, array(
                'trace' => 1,
                'connection_timeout' => 2,
                'location' => $url,
                'uri' => $uri,
                'style' => SOAP_DOCUMENT,
                'use' => SOAP_LITERAL,
                'soap_version' => SOAP_2
            ));
        } catch (SoapFault $fault) {
            echo "faultstring: {$fault->faultstring})\n";
        }
        try {
            $result = $client->GetIVRinfo($params);
        } catch (SoapFault $fault) {
            echo "(faultcode: {$fault->faultcode}, faultstring: {$fault->faultstring} )\n";
        }

        (faultcode: SOAP-ENV:Client, faultstring: The method element needs to belong to the namespace 'http://GETIVRINFO/IVR/IVRINFO'. )

    I have tested this WSDL with a tool called SoapUI, and it returns the results with no errors, so it leads me to believe I'm not formatting the vars or headers correctly with PHP. I also tried passing an XML fragment as the param, but that returns the same error. What am I doing wrong?

        $params = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ivr="http://GETIVRINFO/IVR/IVRINFO">
            <soapenv:Header/>
            <soapenv:Body>
                <ivr:GetIVRinfo>
                    <!--Optional:-->
                    <ivr:zANI>12345</ivr:zANI>
                </ivr:GetIVRinfo>
            </soapenv:Body>
        </soapenv:Envelope>';

    Here is the WSDL document:

    <?xml version="1.0"?><wsdl:definitions name="IVR" targetNamespace="http://GETIVRINFO/IVR/IVRINFO" xmlns:tns="http://GETIVRINFO/IVR/IVRINFO" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:sql="http://schemas.microsoft.com/SQLServer/2001/12/SOAP" xmlns:sqltypes="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types" xmlns:sqlmessage="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage" xmlns:sqlresultstream="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream"> <wsdl:types><xsd:schema targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types' elementFormDefault='qualified' attributeFormDefault='qualified'> <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/> <xsd:simpleType name='nonNegativeInteger'> <xsd:restriction base='xsd:int'> <xsd:minInclusive value='0'/> </xsd:restriction> </xsd:simpleType> <xsd:attribute name='IsNested' type='xsd:boolean'/> <xsd:complexType name='SqlRowSet'> <xsd:sequence> <xsd:element ref='xsd:schema'/> <xsd:any/> </xsd:sequence> <xsd:attribute ref='sqltypes:IsNested'/> </xsd:complexType> <xsd:complexType name='SqlXml' mixed='true'> <xsd:sequence> <xsd:any/> </xsd:sequence> </xsd:complexType> <xsd:simpleType name='SqlResultCode'> <xsd:restriction base='xsd:int'> <xsd:minInclusive value='0'/> </xsd:restriction> </xsd:simpleType> </xsd:schema> <xsd:schema
targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage' elementFormDefault='qualified' attributeFormDefault='qualified'> <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/> <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types'/> <xsd:complexType name='SqlMessage'> <xsd:sequence minOccurs='1' maxOccurs='1'> <xsd:element name='Class' type='sqltypes:nonNegativeInteger'/> <xsd:element name='LineNumber' type='sqltypes:nonNegativeInteger'/> <xsd:element name='Message' type='xsd:string'/> <xsd:element name='Number' type='sqltypes:nonNegativeInteger'/> <xsd:element name='Procedure' type='xsd:string'/> <xsd:element name='Server' type='xsd:string'/> <xsd:element name='Source' type='xsd:string'/> <xsd:element name='State' type='sqltypes:nonNegativeInteger'/> </xsd:sequence> <xsd:attribute ref='sqltypes:IsNested'/> </xsd:complexType> </xsd:schema> <xsd:schema targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream' elementFormDefault='qualified' attributeFormDefault='qualified'> <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/> <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types'/> <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage'/> <xsd:complexType name='SqlResultStream'> <xsd:choice minOccurs='1' maxOccurs='unbounded'> <xsd:element name='SqlRowSet' type='sqltypes:SqlRowSet'/> <xsd:element name='SqlXml' type='sqltypes:SqlXml'/> <xsd:element name='SqlMessage' type='sqlmessage:SqlMessage'/> <xsd:element name='SqlResultCode' type='sqltypes:SqlResultCode'/> </xsd:choice> </xsd:complexType> </xsd:schema> <xsd:schema targetNamespace="http://GETIVRINFO/IVR/IVRINFO" elementFormDefault="qualified" attributeFormDefault="qualified"> <xsd:import namespace="http://www.w3.org/2001/XMLSchema"/> <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types"/> <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage"/> <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream"/> <xsd:element name="GetIVRinfo"> <xsd:complexType> <xsd:sequence> <xsd:element minOccurs="0" maxOccurs="1" name="zANI" type="xsd:string" nillable="true"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="GetIVRinfoResponse"> <xsd:complexType> <xsd:sequence> <xsd:element minOccurs="1" maxOccurs="1" name="GetIVRinfoResult" type="sqlresultstream:SqlResultStream"/> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> </wsdl:types> <wsdl:message name="GetIVRinfoIn"> <wsdl:part name="parameters" element="tns:GetIVRinfo"/> </wsdl:message> <wsdl:message name="GetIVRinfoOut"> <wsdl:part name="parameters" element="tns:GetIVRinfoResponse"/> </wsdl:message> <wsdl:portType name="SXSPort"> <wsdl:operation name="GetIVRinfo"> <wsdl:input message="tns:GetIVRinfoIn"/> <wsdl:output message="tns:GetIVRinfoOut"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name="SXSBinding" type="tns:SXSPort"> <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/> <wsdl:operation name="GetIVRinfo"> <soap:operation soapAction="http://GETIVRINFO/IVR/IVRINFO/GetIVRinfo" style="document"/> <wsdl:input> <soap:body use="literal"/> </wsdl:input> <wsdl:output> <soap:body use="literal"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="IVR"> <wsdl:port name="SXSPort" binding="tns:SXSBinding"> <soap:address 
location="http://GETIVRINFO/IVR/IVRINFO"/> </wsdl:port> </wsdl:service> </wsdl:definitions>


  • db:migrate creates sequences but doesn't alter table?

    - by RewbieNewbie
    Hello, I have a migration that creates a Postgres sequence for auto-incrementing a primary identifier, then executes a statement to alter the column and set its default value:

        execute 'CREATE SEQUENCE "ServiceAvailability_ID_seq";'
        execute <<-SQL
          ALTER TABLE "ServiceAvailability"
          ALTER COLUMN "ID" SET DEFAULT NEXTVAL('ServiceAvailability_ID_seq');
        SQL

    If I run db:migrate, everything seems to work, in that no errors are returned. However, if I run the Rails application I get:

        null value in column "ID" violates not-null constraint

    By executing the migration's SQL statements manually, I discovered that this error occurs because the ALTER statement isn't working, or isn't being executed. If I manually execute:

        CREATE SEQUENCE "ServiceAvailability_ID_seq";

    I get:

        ERROR: relation "serviceavailability_id_seq" already exists

    which means the migration successfully created the sequence. However, if I manually run:

        ALTER TABLE "ServiceProvider" ALTER COLUMN "ID" SET DEFAULT NEXTVAL('ServiceProvider_ID_seq');

    it runs successfully and creates the default NEXTVAL. So the question is: why is the migration file creating the sequence with the first execute statement, but not altering the table with the second execute? (Remembering, no errors are output on running db:migrate.) Thank you, and apologies for the tl;dr.
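
    A quick diagnostic sketch (not from the original question; it assumes the same table and column names): after running the migration, ask Postgres what default it actually recorded for the column.

        SELECT column_name, column_default
        FROM information_schema.columns
        WHERE table_name = 'ServiceAvailability' AND column_name = 'ID';

    If column_default comes back NULL, the ALTER never took effect; if it shows nextval on "ServiceAvailability_ID_seq", the default is in place and the problem lies elsewhere, for example an INSERT that explicitly supplies NULL for "ID".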


  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this because I cannot find documentation on how to do it anywhere else; I hope others find it useful. For starters, this was not an easy task for me to figure out. I would really like to know why something that so many people presumably need to do is so cumbersome, but that's a topic for another day.

    The dropdown I wanted to add was for 'Account Status', which displays whether the account is 'Enabled' or 'Disabled' in the data source, Active Directory. This option also allows helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface.

    The first thing I had to do was create the attribute itself. This is done by going to Administration → Schema Management from the FIM 2010 Portal. Once here, click on All Attributes; listed here are all attributes and their associated Resource Types in FIM. To create the 'AccountStatus' attribute, click New. Enter 'AccountStatus' (no spaces) for the System Name and 'Account Status' for the Display Name. The Data Type is 'Indexed String'. Click Next. Leave everything on the Localization tab at its defaults and click Next. On the Validation tab, enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values, 'Enabled' and 'Disabled'. Click Finish and then Submit to complete adding the attribute.

    The next step involves associating the attribute with a resource type; this is called 'binding' the attribute. From the Schema Management page, click All Bindings, and from the page that comes up, click New. Enter 'User' for the Resource Type and 'Account Status' for the Attribute Type; this essentially binds the Account Status attribute to the 'User' resource type. Click Next. On the 'Attribute Override' tab, type 'Account Status' in the Display Name field and click Next. On the 'Localization' tab, click Next. On the 'Validation' tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete.

    Now that the attribute and the binding are complete, you have to give users permission to see the attribute on the user edit page. Go to Administration → Management Policy Rules. Look for the rule named 'Administration: Administrators can read and update Users' and click on it. Once it opens, click on the 'Target Resources' tab and look at the section named Resource Attributes. Type the 'Account Status' attribute in at the end and check it with the validator. Once done, click OK to save the changes.

    Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for user editing. Go to Administration → Resource Control Display Configuration. From here, navigate until you find the RCDC named 'Configuration for User Editing' and click on it. The first step is to export the configuration data file: click the Export configuration link and save the file to your desktop or another folder. Find the file you just exported and open it in your XML editor of choice; I use Notepad, but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. I used EmployeeType as my example. Copy the control from the beginning tag <my:Control…> to the ending tag </my:Control>. Now paste what you copied wherever you want it to appear on the form, between two other controls; I chose to place the 'Account Status' field after the 'Account Name' field. After you paste the control, you will need to modify it so it looks like this:

        <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList"
                    my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}"
                    my:Description="{Binding Source=schema, Path=AccountStatus.Description}"
                    my:RightsLevel="{Binding Source=rights, Path=AccountStatus}">
          <my:Properties>
            <my:Property my:Name="ValuePath" my:Value="Value"/>
            <my:Property my:Name="CaptionPath" my:Value="Caption"/>
            <my:Property my:Name="HintPath" my:Value="Hint"/>
            <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/>
            <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/>
          </my:Properties>
        </my:Control>

    Notice where the XML specifies which attribute you are dealing with: AccountStatus. Once you are done modifying it, save the file and make sure it keeps its .xml extension.

    Now go back to the 'Configuration for User Editing' screen and look at the section named 'Configuration Data'. Click the Browse button, find the XML file you just modified, and choose it. Click OK at the bottom of the window and you are done! Now when you click on a user's name in the FIM Portal, you should see the newly added dropdown box.

    Later I will post more about this dropdown, specifically on how to automate actually disabling the account in the data source through the FIM workflows and MAs.


  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow-on from my previous post, where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here work for Fusion Middleware 11.1.1.2 and 11.1.1.3 environments upgrading to 11.1.1.4. In this example I'm just upgrading SOA Suite on OEL 64-bit, but the steps are the same; some of the downloads may differ based on your environment. To upgrade to 11.1.1.4 you need access to http://support.oracle.com, as this is where the downloads reside. Oracle also provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using the OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html).

    The high-level steps to upgrade are as follows:

    1. Download the software
    2. Shut down your SOA environment
    3. Upgrade WLS to 11.1.1.4
    4. Upgrade SOA Suite to 11.1.1.4
    5. Upgrade OSB to 11.1.1.4
    6. Upgrade the MDS schemas

    Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads; if you are using 64-bit, use the generic versions. The downloads are listed at http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC. For the purpose of this post I downloaded the following patches:

    11060985 – WLS Server Generic
    11060960 – SOA Suite
    11061005 – OSB Suite

    You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have provided the link from Oracle Support:

    11060956 – RCU

    Make sure you have the Java executable in your PATH, e.g.:

        export PATH=$JAVA_HOME/bin:$PATH

    Make sure your whole WebLogic environment has been shut down before performing the upgrade. Extract the WLS patch 11060985 to a temporary directory and start the installer:

        java -jar wls1034_upgrade_generic.jar

    Please note that if you are not running 64-bit, the upgrade executable is a .bin file which you can execute directly. Choose the right Oracle home for your WebLogic Server install. In the Register for Security Updates screen you can enter your details or just click Next; if you do not enter details, confirm that you don't want to receive the updates. Select the products you want to upgrade and click Next (it is recommended that you accept the defaults), then confirm the directories that will be upgraded. The upgrade of WLS is now complete.

    Extract both your SOA downloads to a temporary directory and run the installer found in Disk1 (the Java location and version may be different in your environment):

        ./runInstaller -jreLoc /java/jdk1.6.0_20/jre

    Skip the Software Updates, ensure your system meets the prerequisites, and set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server: since you are upgrading from 11.1.1.x, you will be on WebLogic. Start the install. Once the upgrade of SOA Suite completes, accept the defaults to finish.

    In my environment I have OSB installed, so I need to upgrade that next. If you don't have OSB, you can go straight to completing the DB schema updates below. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder:

        ./runInstaller -jreLoc /java/jdk1.6.0_20/jre

    Skip the software updates, select the Oracle home for your environment, accept the warning to continue the upgrade, point to the location of your WebLogic Server installation, and install the OSB upgrade. Once the upgrade completes, accept the defaults.

    Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed, and execute the following command to update the MDS schema. Please note that in my examples the context is set to DEV, so all my schemas are prefixed by DEV; yours may be different.

        ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS

    You will be asked for the sys and schema passwords:

        Enter the database administrator password for "sys":
        Enter the schema password for schema user "DEV_MDS":

    Change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant for SOA Suite is installed, and execute the following command to update the SOA and BAM schemas:

        ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA

    To check that the upgrade installed correctly, run the following SQL as sysdba:

        SELECT owner, version, status FROM schema_version_registry;

        OWNER           VERSION      STATUS
        --------------- ------------ -----------
        DEV_MDS         11.1.1.4.0   VALID
        DEV_SOAINFRA    11.1.1.4.0   VALID

    Don't stress if the versions are not all sitting at 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.


  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading it to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

    Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in Figure 1, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

    Figure 1: Memcached API Implementation for InnoDB

    With the Memcached daemon running in the same process space, users get very low-latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations.

    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second.

    Figure 2: Over 9x Faster INSERT Operations

    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees.
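
    As a concrete illustration of the "no application changes" point, here is a minimal sketch using a standard Memcached client (Python's memcache module, purely as an example); the host, port and key names are assumptions, and the same calls would work unchanged against a vanilla memcached server:

        import memcache

        # Connect to the Memcached endpoint exposed by the MySQL server / data node
        mc = memcache.Client(['127.0.0.1:11211'])

        mc.set('user:42', 'alice')   # persisted through to the underlying table
        print(mc.get('user:42'))     # read back through the same API, no SQL involved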
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.

    Figure 3: Memcached API Implementation with MySQL Cluster

    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached driver for NDB (which is part of the same process).
    3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster), and the application doesn't have to care; it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every key/value is written to the same table, with each key/value pair stored in a single row, thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about the NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API in our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.


  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator

    Outline of the problem

    Trailing delimiters are empty values at the end of an object in an HL7 ER7-formatted message. Examples:

        Empty field:        NTE|P|  or  NTE|P||
        Empty component:    ORC|1|725^
        Empty subcomponent: ORC|1|||||27&
        Empty repeat:       OBR|1||||||||027~

    A trailing delimiter indicates that the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes ("").

    The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow them. (Note: all schemas always allow trailing delimiters in the MSH segment.)

    1. Using party identifiers. MSH3.1 governs receive/inbound processing: using this value as a party allows you to configure the system to allow inbound trailing delimiters. MSH5.1 governs send/outbound processing: using this value as a party allows you to configure the system to allow outbound trailing delimiters. Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters, you also need to configure the send side to allow them. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream, then open the BizTalk HL7 Configuration tool and, for each party, check the "Allow trailing delimiters (separators)" check box on the Validation tab. Disadvantage: each MSH3.1 and MSH5.1 value must be represented in the parties list and configured. Advantage: granular control over system behavior for each inbound/outbound system.

    2. Using instance properties of a pipeline used in a send port or receive location. In the BizTalk Server Administration console, locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline and open its properties. To the right of the selected pipeline, click the [...] ellipsis button, locate the "TrailingDelimiterAllowed" property in the property list, and set it to True. Advantage: all messages through a particular send port or receive location will allow trailing delimiters. Disadvantage: each send port or receive location must be configured, and there is no granular control over which remote parties may send or receive messages with trailing delimiters.

    3. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component. Use Visual Studio to construct custom receive and send pipelines using the appropriate assembler or disassembler, set the component property "TrailingDelimitersAllowed" to True, compile and deploy the custom pipeline, and use it instead of the standard pipeline for all HL7 message processing. Advantage: all messages using the custom pipeline automatically allow trailing delimiters. Disadvantage: requires custom coding and development to create and deploy the custom pipeline, and there is no granular control over which remote parties may send or receive messages with trailing delimiters.

    What does a trailing delimiter do to the XML schema?

    Allowing trailing delimiters does not have the impact often expected on the actual XML schema. The schema reproduces the message with no data loss, so the message, when represented in XML, must contain the extra fields in order to reproduce the outbound message. A trailing delimiter therefore results in an empty XML field; trailing delimiters are not stripped from the inbound message. Example:

        <PID_21>44172</PID_21><PID_21>9257</PID_21>   (the original maximum number of repeats)
        <PID_21></PID_21>                             (the empty repeated field)

    Allowing trailing delimiters does not remove them from the message; it simply suppresses the check that would otherwise cause a message with trailing delimiters to fail parsing.

    When enabling trailing delimiters cannot fix the problem

    Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If the message has more objects than the schema defines at that location, enabling trailing delimiters will not resolve the problem; the schema must be extended to accommodate the empty message content. Examples:

        Extra field:        NTE|P||||  (only 4 fields in the NTE segment; the 4th field exists but is empty)
        Extra component:    PID|1|1523|47^^^^^^^  (only 5 components in a CX data type; the 5th component exists but is empty)
        Extra subcomponent: ORC|1|||||27&&  (only 2 subcomponents in a CQ data type; the 3rd subcomponent exists but is empty)
        Extra repeat:       PID|1||||||||||||||||||||4419~5217~  (only 2 repeats allowed for the field "Mother's identifier"; the extra repeat exists but is empty)

    In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type:

    Field: add a field of type ST, with a suitable name, to the end of the segment in segments_nnn.xsd.
    Component: create a new custom CX data type (e.g. CX_XtraComp) in datatypes_nnn.xsd, add a new component to the custom data type, and update the field in segments_nnn.xsd to use the custom data type instead of the standard one.
    Subcomponent: create a new custom CQ data type that accepts an additional TS value at the end, create a custom TQ data type that uses the new CQ data type as its first subcomponent, and modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard one.
    Repeat: modify the field definition for PID.21 in segments_nnn.xsd to allow more repeats in the field.


  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL: the precision of SMALLDATETIME is a concept that confuses a lot of people, proven by the many messages I receive every day on this subject. Let me start with the question: what is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. If you do not want to continue reading this blog post, head to my previous post over here: SQL SERVER – Precision of SMALLDATETIME.

    A Social Media Question

    Since the increase of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter and Google+. One very interesting question yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he asked me; let us see if we can help him:

        CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    When the above script is run, the result is not what was expected: since the ORDER BY is descending, the row inserted last was expected to come back as the first row in the result set, and it does not. (Side note: because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY. If we use the name column, we will get an incorrect result, as it can be any name.)

    My Initial Reaction

    My initial reaction was as follows:

    1) Datatype DATETIME2: if finer precision is expected from a column that stores date and time, it should not be smalldatetime. The precision of a smalldatetime column is one minute (read here); for finer precision, use the DATETIME or DATETIME2 data type. Here is the code with this suggestion applied:

        CREATE TABLE #temp (name VARCHAR(100), registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY registered DESC
        GO
        DROP TABLE #temp
        GO

    2) Tie-breaker identity: there is always the possibility that two rows were inserted at the same time, in which case you may need a tie-breaker. If you have an increasing identity column, you can use that:

        CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100), registered datetime2)
        GO
        DECLARE @test datetime2
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT * FROM #temp ORDER BY ID DESC
        GO
        DROP TABLE #temp
        GO

    Those were the two quick suggestions I provided. It is not necessary to use both pieces of advice: it is possible to use only the DATETIME datatype, or for the identity column to have a datatype of BIGINT, or to use another tie-breaker.
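
    For readers who want to see the one-minute precision for themselves, a small illustrative script (my addition, not from the original post): smalldatetime rounds to the nearest minute, so two values a couple of seconds apart can land on different minutes, while values in the same half-minute collapse to the same one.

        SELECT CAST('2012-01-01 10:30:29' AS smalldatetime) AS rounds_down,  -- 2012-01-01 10:30:00
               CAST('2012-01-01 10:30:31' AS smalldatetime) AS rounds_up;   -- 2012-01-01 10:31:00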
    An Alternate NO Solution

    In the Facebook thread, this was also discussed as one of the solutions:

        CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    However, I believe it is not a solution, and it can be further misleading if used on a production server. Here is an example of why it is not a good solution:

        CREATE TABLE #temp (name VARCHAR(100) NOT NULL, registered smalldatetime)
        GO
        DECLARE @test smalldatetime
        SET @test=GETDATE()
        INSERT INTO #temp VALUES ('Value1',@test)
        INSERT INTO #temp VALUES ('Value2',@test)
        GO
        -- Before index
        SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        -- Create index
        ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC)
        GO
        -- After index
        SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
        FROM #temp ORDER BY 3 DESC
        GO
        DROP TABLE #temp
        GO

    Now examine the result sets. An index created on the base table is (indeed) a schema change, and it can affect the result set: the same query returns the rows in a different order after the index is added. Because an index can change the result set, this method is not a reliable way to get the latest inserted rows.

    No Schema Change Requirement

    After giving these two suggestions, I was waiting for feedback from the asker. However, the asker's requirement is that there can be no schema change at all, because the application is used by many other applications. I validated again, and indeed the requirement is no schema change whatsoever: no adding columns, no changing the datatypes of any other columns. There is no further help available either. This is indeed an interesting question; I personally can't think of any solution I could provide him given the no-schema-change requirement. Can you think of one?

    Need of a Database Designer

    This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases that are facing major performance problems or holding redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. I often come across tables with unnecessary columns and performance problems. While working as a developer lead in my earlier jobs, I saw developers adding columns to tables without anybody's consent and retrieving them with SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first: do you have any suggestions for the above question?

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
    Oracle announced the expansion of the Sun Blade portfolio for cloud and highly virtualized environments, delivering powerful performance and simplified management as tightly integrated systems. Along with the SPARC T3-1B blade server, the Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module, with some impressive benchmark results.

    Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running varied commercial applications used in cloud and highly virtualized environments. These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six Fluent world records, and 1.8 times the price-performance of the IBM Power 755 running NAMD, a prominent bio-informatics workload.

    SPEC CPU2006

    The Sun Blade X6275 M2 server module demonstrated the best SPECint_rate2006 result of all published results using the Intel Xeon processor 5600 series, at 679. This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade, and it demonstrates the density advantage of Oracle's new server module for space-constrained data centers.

    Sun Blade X6275 M2 (2 nodes, Intel Xeon X5670 2.93 GHz): 679 SPECint_rate2006. HP ProLiant BL460c G7 (2.93 GHz, Intel Xeon X5670): 347 SPECint_rate2006. IBM BladeCenter HS22V (Intel Xeon X5680): 377 SPECint_rate2006. Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz): 380 SPECint_rate2006. SPEC, SPECint and SPECfp are registered trademarks of the Standard Performance Evaluation Corporation; results from www.spec.org as of 11/24/2010 and this report. For more specifics about these results, please see http://blogs.sun.com/BestPerf.

    Fluent

    The Sun Blade X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic score with IBM DX 360 M series platforms and QLogic "TrueScale" interconnects. Results on the sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case. Results can be found at the FLUENT website.

    ANSYS's FLUENT software solves fluid flow problems and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace and consumer products industries. The FLUENT 12 benchmark test suite consists of seven models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware. FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable, cluster-ready format. The FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.

    All information on the FLUENT website (http://www.fluent.com) is Copyright 1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please see http://blogs.sun.com/BestPerf.

    NAMD

    Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to 1.8x better price/performance than IBM's Power 7-based system. For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides 1.7x better price/performance per rack unit than IBM's system.

    IBM Power 755 4-way cluster (16U): total price for the cluster $324,212 (see IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46). Sun Blade X6275 M2 8-blade cluster (10U): total price for the cluster $193,939. Price/performance and performance/RU comparisons are based on f1ATPase molecule test results. Sun Blade X6275 M2 cluster: $3,568/step/sec, 5.435 step/sec/RU. IBM Power 755 cluster: $6,355/step/sec, 3.189 step/sec/U. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html and http://www.ks.uiuc.edu/Research/namd/performance.html for more information; results as of 11/24/10. For more specifics about these results, please see http://blogs.sun.com/BestPerf.

    Reverse Time Migration

    Reverse Time Migration is heavily used in geophysical imaging and modeling for oil and gas exploration. The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous-generation server module, with super-linear scalability to 16 nodes for the 9-point stencil used in this Reverse Time Migration computational kernel. The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth-imaged cube for interpretation. The final image write from the Sun Blade X6275 M2 server module nodes to the Sun Storage 7410 system achieved 10GbE line speed of 1.25 GBytes/second or better. Between subsequent runs, the effects of I/O buffer caching on the server module nodes and write-optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O-intensive commercial applications. (3D VTI Reverse Time Migration Seismic Depth Imaging; see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information. Results as of 11/14/2010.)


  • activemq-maven-plugin ignore files in classpath?

    - by Oscar Chan
    I have been trying to get the activemq-maven-plugin to run ActiveMQ with configuration from the classpath of the bundle, without much luck. The plugin seems to just ignore resources (resources/main/conf/activemq.properties) in the local bundle. I checked the jar and target/classes, and they are built into the right location. I am able to get the plugin to run (mvn activemq:run) if I take out the PropertyPlaceholderConfigurer bean in activemq.xml. Did I do anything wrong? Here is the output:

        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to start ActiveMQ Broker
        Embedded error: Could not load properties; nested exception is
        java.io.FileNotFoundException: class path resource [conf/activemq.properties]
        cannot be opened because it does not exist
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 2 seconds
        [INFO] Finished at: Mon May 03 15:56:05 PDT 2010
        [INFO] Final Memory: 11M/79M
        [INFO] ------------------------------------------------------------------------

    Here is the pom.xml, in which I point the plugin at activemq.xml via a file URI; that part works:

        <?xml version="1.0"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>oc.test</groupId>
          <artifactId>mq</artifactId>
          <version>0.1</version>
          <name>mq</name>
          <url>http://maven.apache.org</url>
          <dependencies>
            <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>3.8.1</version>
              <scope>test</scope>
            </dependency>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.activemq.tooling</groupId>
                <artifactId>maven-activemq-plugin</artifactId>
                <version>5.3.1</version>
                <configuration>
                  <configUri>xbean:file:src/main/resources/conf/activemq.xml</configUri>
                  <fork>false</fork>
                  <systemProperties>
                    <property>
                      <name>javax.net.ssl.keyStorePassword</name>
                      <value>password</value>
                    </property>
                    <property>
                      <name>org.apache.activemq.default.directory.prefix</name>
                      <value>./target/</value>
                    </property>
                  </systemProperties>
                </configuration>
                <dependencies>
                  <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring</artifactId>
                    <version>2.5.5</version>
                  </dependency>
                </dependencies>
              </plugin>
            </plugins>
          </build>
        </project>

    Here is src/main/resources/conf/activemq.xml:

        <?xml version="1.0"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                                   http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
          <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="locations">
              <value>classpath:conf/activemq.properties</value>
            </property>
            <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
          </bean>
          <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="./data">
            <!-- The transport connectors ActiveMQ will listen to -->
            <transportConnectors>
              <transportConnector name="openwire" uri="tcp://localhost:61616"/>
            </transportConnectors>
          </broker>
        </beans>

    Here is src/main/resources/conf/activemq.properties:

        activemq.port=61616
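
    One thing worth checking, offered as an assumption rather than a confirmed fix: when the plugin loads the broker config through xbean:file:..., the Spring context it builds may not have the project's target/classes on its classpath, so classpath:conf/activemq.properties resolves against the plugin's own classpath and fails. A sketch of a workaround is to reference the properties file the same way the XML is referenced, by file path:

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
          <property name="locations">
            <!-- file: URL instead of classpath:, relative to the Maven module root -->
            <value>file:src/main/resources/conf/activemq.properties</value>
          </property>
          <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
        </bean>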


  • Dependency Injection with Spring/Junit/JPA

    - by Steve
    I'm trying to create JUnit tests for my JPA DAO classes, using Spring 2.5.6 and JUnit 4.8.1. My test case looks like this:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations={"classpath:config/jpaDaoTestsConfig.xml"})
        public class MenuItem_Junit4_JPATest extends BaseJPATestCase {
            private ApplicationContext context;
            private InputStream dataInputStream;
            private IDataSet dataSet;

            @Resource
            private IMenuItemDao menuItemDao;

            @Test
            public void testFindAll() throws Exception {
                assertEquals(272, menuItemDao.findAll().size());
            }

            // ... other test methods omitted for brevity ...
        }

    I have the following in jpaDaoTestsConfig.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

          <!-- uses the persistence unit defined in the META-INF/persistence.xml JPA configuration file -->
          <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="CONOPS_PU" />
          </bean>

          <bean id="groupDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao" lazy-init="true" />
          <bean id="permissionDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.PermissionDao" lazy-init="true" />
          <bean id="applicationUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ApplicationUserDao" lazy-init="true" />
          <bean id="conopsUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ConopsUserDao" lazy-init="true" />
          <bean id="menuItemDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.MenuItemDao" lazy-init="true" />

          <!-- enables interpretation of the @Required annotation to ensure that dependency injection actually occurs -->
          <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/>

          <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations, providing
               convenient access to EntityManagerFactory/EntityManager -->
          <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>

          <!-- transaction manager for use with a single JPA EntityManagerFactory for transactional data
               access to a single datasource -->
          <bean id="jpaTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory"/>
          </bean>

          <!-- enables interpretation of the @Transactional annotation for declarative transaction
               management using the specified JpaTransactionManager -->
          <tx:annotation-driven transaction-manager="jpaTransactionManager" proxy-target-class="false"/>
        </beans>

    Now, when I try to run this, I get the following:

        SEVERE: Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@fa60fa6] to prepare test instance [null(mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest)]
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest': Injection of resource fields failed; nested exception is java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager]
            at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:292)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:959)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireBeanProperties(AbstractAutowireCapableBeanFactory.java:329)
            at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:110)
            at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75)
            at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:255)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:93)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:130)
            at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:61)
            at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:54)
            at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
            at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
            at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:52)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
        Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager]
            at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:159)
            at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.(PersistenceAnnotationBeanPostProcessor.java:559)
            at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$1.doWith(PersistenceAnnotationBeanPostProcessor.java:359)
            at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:492)
            at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:469)
            at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:351)
            at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:745)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:448)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
            at java.security.AccessController.doPrivileged(AccessController.java:219)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:221)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:168)
            at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:435)
            at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:409)
            at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:537)
            at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:180)
            at org.springframework.beans.factory.annotation.InjectionMetadata.injectFields(InjectionMetadata.java:105)
            at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:289)
            ... 18 more

    It seems to be telling me that it's attempting to store an EntityManager object into an EntityManagerFactory field, but I don't understand how or why. My DAO classes accept both an EntityManager and an EntityManagerFactory via the @PersistenceContext attribute, and they work fine if I load them up and run them without the @ContextConfiguration attribute (i.e. if I just use an XmlApplicationContext to load the DAO and the EntityManagerFactory directly in setUp()). Any insights would be appreciated. Thanks. --Steve
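
    For context (my addition, not part of the original question): in JPA, @PersistenceContext injects an EntityManager, while an EntityManagerFactory field must be annotated with @PersistenceUnit, which is exactly the mismatch the IllegalStateException describes. A minimal sketch of the usual field declarations (class and field names are illustrative):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.PersistenceContext;
        import javax.persistence.PersistenceUnit;

        public class MenuItemDao implements IMenuItemDao {

            @PersistenceContext              // container-managed EntityManager
            private EntityManager em;

            @PersistenceUnit                 // factories use @PersistenceUnit, not @PersistenceContext
            private EntityManagerFactory emf;

            // ... finder methods such as findAll() would use em here ...
        }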

    Read the article

  • Any good PostgreSQL client for linux?

    - by senotrusov
    Stack Overflow pointed me at "belongs-on-serverfault" for this, so I am crossposting. I am frustrated at not having a good Linux GUI administration and development tool for PostgreSQL. pgAdmin III is a buggy and unusable piece of... hmm, software, compared to the Windows-only PostgreSQL Maestro and EMS PostgreSQL Manager. phpPgAdmin does not look promising. EMS PostgreSQL Manager can work under Wine, but such a setup has a number of issues. Requirements are: table data editing and browsing for large tables (1M+ rows), able to jump by FK, with some master-slave editing, GUI filtering and so on; ER diagrams with in-place schema editing; schema editing and browsing with all useful GUI support; a log of schema changes to put into DB versioning (migration scripts); a tabbed interface to be able to work with a number of tables and SQL queries at once. And so on. Any ideas?

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail so I brought up a new one. All are running Win 2003. Problem: there appear to be replication issues between the 4 machines but I can't figure out what's causing this. All are registered with the DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Doing dcdiag on the new machine reports: the host could not be resolved to an IP address. This seems crazy as I log into it using the DNS name. I can ping it from the other three machines using this DNS name as well. repadmin /showreps from the new machine says it's seeing the other 3 machines. Doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times. No luck. There are no firewalls running on any of the machines. If I look at Domain info via MMC (on the new machine) it appears that all the information is current. Users, computers, DCs... it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions? EDIT: dcdiag from non-working: C:\Documents and Settings\Administrator.BME>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\YELLOW Starting test: Connectivity The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu) couldn't be resolved, the server name (yellow.server.edu) resolved to the IP address (10.127.24.79) and was pingable. Check that the IP address is registered correctly with the DNS server. ......................... YELLOW failed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\YELLOW Skipping all tests, because server YELLOW is not responding to directory service requests Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : bme Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom Running enterprise tests on : server.edu Starting test: Intersite ......................... server.edu passed test Intersite Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck dcdiag from working: P:\>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\AD1 Starting test: Connectivity ......................... AD1 passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\AD1 Starting test: Replications ......................... AD1 passed test Replications Starting test: NCSecDesc ......................... AD1 passed test NCSecDesc Starting test: NetLogons .........................
AD1 passed test NetLogons Starting test: Advertising ......................... AD1 passed test Advertising Starting test: KnowsOfRoleHolders ......................... AD1 passed test KnowsOfRoleHolders Starting test: RidManager ......................... AD1 passed test RidManager Starting test: MachineAccount ......................... AD1 passed test MachineAccount Starting test: Services ......................... AD1 passed test Services Starting test: ObjectsReplicated ......................... AD1 passed test ObjectsReplicated Starting test: frssysvol ......................... AD1 passed test frssysvol Starting test: frsevent ......................... AD1 passed test frsevent Starting test: kccevent ......................... AD1 passed test kccevent Starting test: systemlog ......................... AD1 passed test systemlog Starting test: VerifyReferences ......................... AD1 passed test VerifyReferences Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : bme Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom Running enterprise tests on : server.edu Starting test: Intersite ......................... server.edu passed test Intersite Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck P:\>
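    The failing GUID CNAME (312ce6ea-...._msdcs.server.edu) points at the new DC's locator records never having been registered in DNS, which would also explain why the older DCs don't list it. A sketch of the usual re-registration steps on the new DC, using standard Windows Server 2003 tooling (restarting Netlogon re-registers the DC's SRV and CNAME records, then re-test resolution and connectivity):

        ipconfig /registerdns
        net stop netlogon
        net start netlogon
        nslookup 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu
        dcdiag /test:connectivity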

    Read the article

  • PAM module for authentication by IP or other password-disabling module

    - by Robin Rosenberg
    I'm looking for a Linux PAM module that accepts any password for connections from a specific IP. I don't want to disable passwords completely. I need it for a migration from one IMAP server to another (Cyrus to Zimbra) without knowing every password. I used such a module some six years ago; that was for an IMAP migration too. Unfortunately I cannot recall the name of the module and can't find it by other means either. Any pointers?

    Read the article

  • How can I set an arbitrary (non default) attribute for an AD user or AD Contact?

    - by makerofthings7
    I have AD users, or contacts, that are not Exchange mailbox users or mail contacts. I also have an SSO system (Ping Identity... technology similar to Microsoft ADFS) which leverages the AD schema attribute CustomAttribute1 to store information needed for SSO. This CustomAttribute1 was created by the Exchange schema extensions. I would like to use CustomAttribute1 for both AD users and AD contacts, as well as the equivalent Exchange users and contacts. Question: Since the Exchange tools will only allow me to modify "Exchange" users, what is the way to modify the AD counterpart? e.g. if the following command sets a mailbox... set-mailbox -Identity [email protected] -CustomAttribute1 [email protected] -WarningAction silentlyContinue What command will allow me to update an AD user (non-mailbox) under the same schema attribute?
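    For what it's worth, Exchange's CustomAttribute1 is stored in plain AD as the extensionAttribute1 attribute, so any LDAP-capable tool can set it directly. A hedged sketch using the ActiveDirectory PowerShell module (identities are illustrative; on older systems ADSI Edit or a small ADSI script does the same job):

        Set-ADUser -Identity someuser -Replace @{extensionAttribute1 = '[email protected]'}
        Set-ADObject -Identity 'CN=Some Contact,OU=Contacts,DC=company,DC=com' -Replace @{extensionAttribute1 = '[email protected]'}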

    Read the article

  • How can I migrate all keyboard shortcuts from one mac to another?

    - by cwd
    I have a lot of custom keyboard shortcuts and will be migrating Macs. I tested Migration Assistant and it did not seem to get these. I read somewhere that they are stored in the individual applications' plist files in the ~/Library/Application Support folder, but even after copying a few of these folders over, the shortcuts do not seem to follow. How can I get all of the keyboard shortcuts migrated to a new Mac?
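    One detail worth checking: per-application shortcuts created in System Preferences are normally stored under the NSUserKeyEquivalents key in each app's plist in ~/Library/Preferences, not Application Support. A sketch for locating and inspecting them (the assumption that your shortcuts live in these keys is mine, not the poster's):

        # list every preference domain that carries custom shortcuts
        defaults find NSUserKeyEquivalents
        # show the global "All Applications" shortcuts, if any are defined
        defaults read -g NSUserKeyEquivalents
        # then copy the matching plists from ~/Library/Preferences on the old Mac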

    Read the article

  • SSH + SAMBA + LDAP question

    - by Mejmo
    Hi, I have SSH + LDAP working (I can log in to Server2 with credentials from the LDAP server Server1). Now I would like to add a Samba server (Server3), and it would be nice if it authenticated users the same way Server2 does. How can I achieve this? As I see it, the Samba schema and the schema used for storing Unix users are different, so if I change the password in the Samba schema, I would still be able to log in over SSH with the old password. I need centralized storage of usernames/passwords: if I change a password once in phpLDAPadmin, it should apply to both Samba and SSH. Thanks.
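    For context, Samba keeps its own hashes (sambaNTPassword and friends) on the user entry, which is why changing one password can leave the other still valid. The usual approach is to point Samba at the same LDAP server with the ldapsam passdb backend and let it keep the hashes in sync. A sketch of the relevant smb.conf settings (hostnames and suffixes are illustrative; the LDAP user entries also need the sambaSamAccount object class added):

        [global]
           security = user
           passdb backend = ldapsam:ldap://server1.example.com
           ldap suffix = dc=example,dc=com
           ldap user suffix = ou=People
           ldap admin dn = cn=admin,dc=example,dc=com
           # update userPassword whenever smbpasswd changes the NT hash
           ldap passwd sync = yes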

    Read the article

  • Entity Framework and multi-tenancy database design

    - by Junto
    I am looking at multi-tenancy database schema design for a SaaS concept. It will be ASP.NET MVC - EF, but that isn't so important. Below you can see an example database schema (the Tenant being the Company). The CompanyId is replicated throughout the schema and the primary key has been placed on the natural key plus the tenant Id. Plugging this schema into the Entity Framework gives the following errors when I add the tables into the Entity Model file (Model1.edmx): The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The relationship 'FK_OrderLine_Product' uses the set of foreign keys '{ProductId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model. The question is in two parts: Is my database design incorrect? Should I refrain from these compound primary keys? I'm questioning my sanity regarding the fundamental schema design (frazzled brain syndrome). Please feel free to suggest the 'idealized' schema. Alternatively, if the database design is correct, then is EF unable to match the keys because it perceives these foreign keys as potentially misconfigured 1:1 relationships? In which case, is this an EF bug and how can I work around it?
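    For illustration, one schema pattern that satisfies the "fully contained or fully not contained" rule in those errors is to keep single-column surrogate primary keys and enforce tenant consistency with composite foreign keys backed by UNIQUE constraints. A sketch in T-SQL (table and column names follow the question; this is one option under those assumptions, not the only correct design):

        CREATE TABLE Customer (
            CustomerId int NOT NULL PRIMARY KEY,
            CompanyId  int NOT NULL,
            UNIQUE (CompanyId, CustomerId)  -- lets children reference the pair
        );
        CREATE TABLE [Order] (
            OrderId    int NOT NULL PRIMARY KEY,
            CompanyId  int NOT NULL,
            CustomerId int NOT NULL,
            UNIQUE (CompanyId, OrderId),
            -- the tenant id travels with every FK, so rows cannot cross tenants
            FOREIGN KEY (CompanyId, CustomerId)
                REFERENCES Customer (CompanyId, CustomerId)
        );

    Here the foreign key {CompanyId, CustomerId} lies fully outside the primary key {OrderId}, so the EF mapping restriction no longer fires.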

    Read the article

  • Spring security ldap: no declaration can be found for element 'ldap-authentication-provider'

    - by wuntee
    Following the spring-security documentation: http://static.springsource.org/spring-security/site/docs/3.0.x/reference/ldap.html I am trying to set up LDAP authentication (very simple - just need to know if a user is authenticated or not, no authorities mapping needed) and have put this in my applicationContext-security.xml file <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> ... <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" /> <ldap-authentication-provider user-search-filter="(samaccountname={0})" user-search-base="dc=company,dc=com"/> The problem I run into is that ldap-authentication-provider doesn't seem to be recognized; I feel like I may be missing some configuration in the beans definition. The error I get when trying to run the application is: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 27 in XML document from ServletContext resource [/WEB-INF/rvaContext-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:465) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:395) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardHost.start(StandardHost.java:722) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:593) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:236) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:172) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:382) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:316) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:429) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3185) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1955) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:725) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:322) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1693) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:368) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:834) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:148) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:250) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388) ... 28 more Can anyone see what I'm missing? Also, is that all I need to add to the security beans file in order to authenticate against LDAP?
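    One plausible cause of this particular validation error: in the Spring Security 3.0 schema, <ldap-authentication-provider> is declared as a child of <authentication-manager>, not as a top-level element, so placing it directly under <beans:beans> fails schema validation with exactly this "no declaration can be found" message. A sketch of the nesting (attribute values copied from the question; the spring-security-ldap jar also needs to be on the classpath for the LDAP elements to work at runtime):

        <ldap-server id="ldapServer" url="ldap://adapps.company.com:389/dc=company,dc=com" />

        <authentication-manager>
            <ldap-authentication-provider server-ref="ldapServer"
                user-search-filter="(samaccountname={0})"
                user-search-base="dc=company,dc=com" />
        </authentication-manager>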

    Read the article

  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi] Another problem is that even if you don't cause a race condition, this exception can still occur, even when starting one broker after another while waiting for each to initialize properly in between. If one process is run by root as the first instance and the other as a normal user, somehow the user process tries to register its own JMX connector, though there already is one. Or another exception, which happens when the broker that successfully registered the JMX connector goes down: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused] Those exceptions cause the network of brokers to stop working, or not to work at all. The trick to disabling JMX was that JMX had to be disabled in the connection factory as well. The documentation http://activemq.apache.org/jmx.html does not explicitly say that this is needed. So I had to struggle for 2 days until I found the solution: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd"> <!-- Spring JMS Template --> <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate"> <constructor-arg ref="connectionFactory" /> </bean> <!-- Caching, so that the JMS template is actually usable performance-wise --> <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"> <constructor-arg ref="amqConnectionFactory" /> <property name="exceptionListener" ref="jmsExceptionListener" /> <property name="sessionCacheSize" value="1" /> </bean> <!-- Each client connects to its own broker; the brokers are networked with each other. Only if JMX is disabled here again as well does it actually stay disabled... --> <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" /> <!-- Brokers pick their own port and are connected to each other, thereby forming a grid. This is somewhat slower, but more fault-tolerant. See http://activemq.apache.org/networks-of-brokers.html --> <amq:broker useJmx="false" persistent="false"> <!-- Required to disable JMX for good --> <amq:managementContext> <amq:managementContext connectorHost="localhost" createConnector="false" /> </amq:managementContext> <!-- Now the normal configuration for a network of brokers --> <amq:networkConnectors> <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" /> </amq:networkConnectors> <amq:persistenceAdapter> <amq:memoryPersistenceAdapter /> </amq:persistenceAdapter> <amq:transportConnectors> <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" /> </amq:transportConnectors> </amq:broker> With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM. That flag alone somehow also didn't work for me, because the connection factory started the JMX connector.
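    The same setup can also be expressed programmatically, which makes the two places where JMX must be disabled explicit. A sketch under the same assumptions as the XML above (broker name and URL options are illustrative, not from the original post):

        import org.apache.activemq.ActiveMQConnectionFactory;
        import org.apache.activemq.broker.BrokerService;

        public class NoJmxBroker {
            public static void main(String[] args) throws Exception {
                BrokerService broker = new BrokerService();
                broker.setBrokerName("broker");
                broker.setUseJmx(false);       // no JMX connector gets registered
                broker.setPersistent(false);
                broker.start();

                // the factory must not spin up its own embedded broker with JMX enabled
                ActiveMQConnectionFactory cf =
                        new ActiveMQConnectionFactory("vm://broker?create=false");
                cf.createConnection().close();
                broker.stop();
            }
        }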

    Read the article

  • JMS messaging implementation

    - by Gandalf StormCrow
    I've been struggling with this "simple" (for more experienced people) task and I've been stuck for 2 days now; I need help. I've changed things around like a zillion times; finally I stumbled upon this Spring JMS tutorial. What I want to do: send a message and receive it. I've also been reading chapter 8 of this book on messaging. It explains the 2 types of messaging really nicely, and there is a nice example for the publish-and-subscribe type but no example for point-to-point messaging (this is the one I need). I'm able to send a message to the queue on my own, but I don't have a clue how to receive; that's why I tried this Spring tutorial. Here is what I've got so far: SENDER: package quartz.spring.com.example; import java.util.HashMap; import java.util.Map; import javax.jms.ConnectionFactory; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.Queue; import javax.jms.Session; import org.springframework.jms.core.MessageCreator; import org.springframework.jms.core.JmsTemplate; import org.springframework.jms.core.JmsTemplate102; import org.springframework.jms.core.MessagePostProcessor; public class JmsQueueSender { private JmsTemplate jmsTemplate; private Queue queue; public void setConnectionFactory(ConnectionFactory cf) { this.jmsTemplate = new JmsTemplate102(cf, false); } public void setQueue(Queue queue) { this.queue = queue; } public void simpleSend() { this.jmsTemplate.send(this.queue, new MessageCreator() { public Message createMessage(Session session) throws JMSException { return session.createTextMessage("hello queue world"); } }); } public void sendWithConversion() { Map map = new HashMap(); map.put("Name", "Mark"); map.put("Age", new Integer(47)); jmsTemplate.convertAndSend("testQueue", map, new MessagePostProcessor() { public Message postProcessMessage(Message message) throws JMSException { message.setIntProperty("AccountID", 1234); message.setJMSCorrelationID("123-00001"); return message; } }); } } RECEIVER: package quartz.spring.com.example; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageListener; import javax.jms.TextMessage; public class ExampleListener implements MessageListener { public void onMessage(Message message) { if (message instanceof TextMessage) { try { System.out.println(((TextMessage) message).getText()); } catch (JMSException ex) { throw new RuntimeException(ex); } } else { throw new IllegalArgumentException("Message must be of type TextMessage"); } } } applicationcontext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd"> <bean id="sender" class="quartz.spring.com.example.JmsQueueSender" init-method="sendWithConversion" /> <bean id="receiver" class="quartz.spring.com.example.ExampleListener" init-method="onMessage" /> </beans> Didn't really know that the learning curve for this is so long; I mean, the idea is very simple: Send a message to the destination queue. Receive a message from the destination queue. To receive messages, you do the following (so the book says): 1 Locate a ConnectionFactory, typically using JNDI. 2 Use the ConnectionFactory to create a Connection. 3 Use the Connection to create a Session. 4 Locate a Destination, typically using JNDI. 5 Use the Session to create a MessageConsumer for that Destination. Once you've done this, methods on the MessageConsumer enable you to either query the Destination for messages or to register for message notification. Can somebody please point me in the right direction? Is there a tutorial which explains in detail how to receive a message from the queue? I have the working send-message code; I didn't post it here because this post is too long as it is.
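    To make those five steps concrete, here is a minimal synchronous receiver in plain JMS (a sketch: the ConnectionFactory and Destination are assumed to come from JNDI or from Spring-configured beans, and the queue name is whatever the sender used):

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Destination;
        import javax.jms.MessageConsumer;
        import javax.jms.Session;
        import javax.jms.TextMessage;

        public class SimpleQueueReceiver {
            public static String receiveOne(ConnectionFactory cf, Destination queue) throws Exception {
                Connection connection = cf.createConnection();      // steps 1-2
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); // step 3
                    MessageConsumer consumer = session.createConsumer(queue);                    // steps 4-5
                    connection.start();                 // nothing is delivered until start()
                    TextMessage msg = (TextMessage) consumer.receive(5000);  // block up to 5s
                    return msg == null ? null : msg.getText();
                } finally {
                    connection.close();
                }
            }
        }

    In Spring, the asynchronous equivalent is usually a DefaultMessageListenerContainer bean wired to the connection factory, the destination, and the ExampleListener above, rather than an init-method on the listener bean.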

    Read the article

  • Custom Filter Problem?

    - by mr.lost
    Greetings all. I am using Spring Security 3 and I want to perform some logic (saving some data in the session) when the user visits the site and is remembered, so I extended the GenericFilterBean class, performed the logic in the doFilter method, then completed the filter chain by calling the chain.doFilter method, and inserted that filter after the remember-me filter in the security.xml file. But there's a problem: the filter is executed on each page whether the user is remembered or not. Is there something wrong with the filter implementation or the position of the filter? I also have a simple question: is the filter chain executed on each page by default? And when making a custom filter, should I add it to web.xml too? the filter class: package projects.internal; import java.io.IOException; import javax.servlet.FilterChain; import javax.servlet.ServletException; import javax.servlet.ServletRequest; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.Authentication; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.web.filter.GenericFilterBean; import projects.ProjectManager; public class rememberMeFilter extends GenericFilterBean { private ProjectManager projectManager; @Autowired public rememberMeFilter(ProjectManager projectManager) { this.projectManager = projectManager; } @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { System.out.println("In The Filter"); Authentication auth = (Authentication) SecurityContextHolder .getContext().getAuthentication(); HttpServletResponse response = ((HttpServletResponse) res); HttpServletRequest request = ((HttpServletRequest) req); // if the user is not remembered, do nothing if (auth == null) { chain.doFilter(request, response); } else { // the user is remembered, save some data in the session System.out.println("User Is Remembered"); chain.doFilter(request, response); } } } the security.xml file: <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <global-method-security pre-post-annotations="enabled"> </global-method-security> <http use-expressions="true" > <remember-me data-source-ref="dataSource"/> <intercept-url pattern="/" access="permitAll" /> <intercept-url pattern="/images/**" filters="none" /> <intercept-url pattern="/scripts/**" filters="none" /> <intercept-url pattern="/styles/**" filters="none" /> <intercept-url pattern="/p/login" filters="none" /> <intercept-url pattern="/p/register" filters="none" /> <intercept-url pattern="/p/forgot_password" filters="none" /> <intercept-url pattern="/p/**" access="isAuthenticated()" /> <custom-filter after="REMEMBER_ME_FILTER" ref="rememberMeFilter" /> <form-login login-processing-url="/j_spring_security_check" login-page="/p/login" authentication-failure-url="/p/login?login_error=1" default-target-url="/p/dashboard" authentication-success-handler-ref="myAuthenticationHandler" always-use-default-target="false" /> <logout/> </http> <beans:bean id="myAuthenticationHandler" class="projects.internal.myAuthenticationHandler" /> <beans:bean id="rememberMeFilter" class="projects.internal.rememberMeFilter" > </beans:bean> <authentication-manager alias="authenticationManager"> <authentication-provider> <password-encoder hash="md5" /> <jdbc-user-service data-source-ref="dataSource" /> </authentication-provider> </authentication-manager> </beans:beans> Any help?
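    For what it's worth, a servlet filter in the Spring Security chain runs on every request by design, and a filter declared via <custom-filter> does not also go in web.xml (the whole chain is reached through the single DelegatingFilterProxy entry there). To run the remember-me logic only once, one option is to guard it with a session attribute and detect remembered users by their token type. A sketch of a revised doFilter (the attribute name is illustrative; it also needs imports for javax.servlet.http.HttpSession and org.springframework.security.authentication.RememberMeAuthenticationToken):

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            // the remember-me filter installs this token type for remembered users
            boolean remembered = auth instanceof RememberMeAuthenticationToken;
            HttpSession session = request.getSession(false);
            if (remembered && (session == null || session.getAttribute("rememberMeHandled") == null)) {
                // ... save the data in the session once ...
                request.getSession(true).setAttribute("rememberMeHandled", Boolean.TRUE);
            }
            chain.doFilter(req, res);   // always continue the chain
        }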

    Read the article

  • How to use JAXB to process messages from two separate schemas (with same rootelement name)

    - by sairn
    Hi, we have a client that is sending XML messages that are produced from two separate schemas. We would like to process them in a single application, since the content is related. We cannot modify the client's schema that they are using to produce the XML messages... We can modify our own copies of the two schemas (or binding.jxb) -- if it helps -- in order to enable our JAXB processing of messages created from the two separate schemas. Unfortunately, both schemas have the same root element name (see below). QUESTION: Does JAXB absolutely prohibit processing two schemas that have the same root element name? If so, I will stop my "easter egg" hunt for a solution to this... Or is there some workaround that would enable us to use JAXB to process these XML messages produced from two different schemas? schema1 (note the root element name: "A"): <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="AA"> <xsd:complexType> <xsd:sequence> <xsd:element name="AAA1" type="xsd:string" /> <xsd:element name="AAA2" type="xsd:string" /> <xsd:element name="AAA3" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="BB"> <xsd:complexType> <xsd:sequence> <xsd:element name="BBB1" type="xsd:string" /> <xsd:element name="BBB2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> schema2 (note, again, using the same root element name: "A") <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="CCC"> <xsd:complexType> <xsd:sequence> <xsd:element name="DDD1" type="xsd:string" /> <xsd:element name="DDD2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE"> <xsd:complexType> <xsd:sequence> <xsd:element name="EEE1"> <xsd:complexType> <xsd:sequence> <xsd:element name="FFF1" type="xsd:string" /> <xsd:element name="FFF2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema>
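    JAXB itself does not prohibit two schemas sharing a root element name; the collision only arises if both are compiled into the same package and context. Compiling each schema into its own package and using a separate JAXBContext per package keeps the two "A" roots apart. A sketch (the package names are illustrative):

        // compile each schema into its own package with xjc's -p switch, e.g.:
        //   xjc -p com.example.schema1 schema1.xsd
        //   xjc -p com.example.schema2 schema2.xsd
        import java.io.File;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;

        public class DualSchemaReader {
            public static Object read(File xml, boolean fromSchema1) throws Exception {
                // one context per generated package; the duplicate root name "A"
                // is disambiguated by the context choice, not the element name
                JAXBContext ctx = JAXBContext.newInstance(
                        fromSchema1 ? "com.example.schema1" : "com.example.schema2");
                Unmarshaller u = ctx.createUnmarshaller();
                return u.unmarshal(xml);
            }
        }

    The caller still has to know, or detect (e.g. by validating against each schema in turn), which schema a given message belongs to, since the root name alone cannot tell the two apart.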

    Read the article

  • How do I create statistics to make ‘small’ objects appear ‘large’ to the Optimizer?

    - by Maria Colgan
    I recently spoke with a customer who has a development environment that is a tiny fraction of the size of their production environment. His team has been tasked with identifying problem SQL statements in this development environment before new code is released into production. The problem is the objects in the development environment are so small that the execution plans selected in the development environment rarely reflect what actually happens in production. To ensure the development environment accurately reflects production, in the eyes of the Optimizer, the statistics used in the development environment must be the same as the statistics used in production. This can be achieved by exporting the statistics from production and importing them into the development environment. Even though the underlying objects are a fraction of the size of production, the Optimizer will see them as the same size and treat them the same way as it would in production. Below are the necessary steps to achieve this in their environment. I am using the SH sample schema as the application schema whose statistics we want to move from production to development. Step 1. Create a staging table, in the production environment, where the statistics can be stored. Step 2. Export the statistics for the application schema, from the data dictionary in production, into the staging table. Step 3. Create an Oracle directory on the production system where the export of the staging table will reside and grant the SH user the necessary privileges on it. Step 4. Export the staging table from production using Data Pump export. Step 5. Copy the dump file containing the staging table from production to development. Step 6. Create an Oracle directory on the development system where the export of the staging table resides and grant the SH user the necessary privileges on it. Step 7. Import the staging table into the development environment using Data Pump import. Step 8. Import the statistics from the staging table into the dictionary in the development environment. You can get a copy of the script I used to generate this post here. +Maria Colgan
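    A hedged sketch of what those steps look like with DBMS_STATS and Data Pump (the directory and staging table names are illustrative; the script linked above is the authoritative version):

        -- Steps 1-2, on production: create the staging table and export SH's statistics into it
        BEGIN
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SH', stattab => 'MY_STATS');
          DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SH', stattab => 'MY_STATS');
        END;
        /
        -- Steps 3-7: move the staging table between systems with Data Pump
        --   expdp sh DIRECTORY=stats_dir DUMPFILE=sh_stats.dmp TABLES=MY_STATS
        --   impdp sh DIRECTORY=stats_dir DUMPFILE=sh_stats.dmp
        -- Step 8, on development: publish the statistics into the data dictionary
        BEGIN
          DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SH', stattab => 'MY_STATS');
        END;
        /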

    Read the article

  • Does OSB has any database dependency?

    - by Manoj Neelapu
    The major functionality of OSB is database independent. Most of the internal data structures that are required by OSB are stored in memory. The reporting functionality of OSB requires the DB tables to be accessible. http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15017/before.htm#BABCJHDJ It should however be noted that we can still run OSB without creating any tables in the database. In such cases the reporting functionality cannot be used, whereas the other functions in OSB will work just fine. We will also see a few errors in the log file indicating the absence of these tables, which we can ignore. If the reporting function is required, we will have to install a few tables. http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15017/before.htm#BABBBEHD indicates that running RCU is recommended. The OSB reporting tables are bundled along with the SOA schema in RCU. OSB requires only two simple tables for reporting functionality, so installing the complete SOA schema is a little far-fetched. The SOA schema contains a lot of tables which OSB doesn't require at all. Moreover, the OSB tables are too simple to require a tool like RCU. The solution would be to manually create the tables required by OSB. To make life easier, the table definitions are available in the dbscripts folder under OSB_HOME, e.g. D:\Oracle\Middleware\osb\11gPS2\Oracle_OSB1\dbscripts ($OSB_HOME=D:\Oracle\Middleware\osb\11gPS2\Oracle_OSB1). If you are not planning to use the reporting feature in OSB, then you can also delete the JDBC data sources that come along with the standard OSB domain. WLST script to delete cgDataSources from an OSB domain. OSB will work fine without the DB tables and the JDBC data sources.

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >