Search Results

Search found 17914 results on 717 pages for 'xml schema'.

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information:

    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL Server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.

    To update a database located on an Oracle Database server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in Repository.WS in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.

    To update a database located on a Microsoft SQL Server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed. Replace the ODServer text with the username used when the database was installed; you can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    4. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script. Note: the database name is the initial catalog in the connection string in the web.config file.

    Editor's note: the database update fixes a problem with permissions where the permissions for a user would be incorrectly updated when a group that the user was removed from has its permissions changed.
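    For reference, a minimal sketch of the Oracle-side session described in step 3 above (the login credentials, schema owner name and password shown are placeholders, not values from the post; substitute the values from your own web.config):

        C:\> sqlplus system@ORCL
        -- log in with the system account, then run the script with its two parameters
        SQL> @C:\AlterSchemaObjects.sql ODDev MyPassword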

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: the scalability enhancements delivered by extensions to the multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2: MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL key/value API, cross-data-center replication and ease-of-use. These enhancements are summarized in the figure below and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use. One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions: the MySQL Cluster 7.2 data node is now functionally divided into seven thread types, each discussed in more detail below:
    1. Local Data Manager threads (ldm); note that these are sometimes also called LQH threads.
    2. Transaction Coordinator threads (tc)
    3. Asynchronous Replication threads (rep)
    4. Schema Management threads (main)
    5. Network receiver threads (recv)
    6. Network send threads (send)
    7. IO threads

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions, which means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances, where the workload needed to sustain very high update loads, it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release only one TC thread was available; this limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhances complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented as a single thread.

    The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously, other threads handled the sending operations themselves, which can provide lower latency; to achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any send threads, which is useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, and could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements: the scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1. The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper, coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion: MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, it is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-threaded processor designs.
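    To make the thread-type list above concrete, here is a sketch of a possible data node configuration using the ThreadConfig parameter for the multi-threaded data node; the thread counts are illustrative assumptions only, not recommendations from the post:

        # config.ini fragment (illustrative; tune the counts to your workload)
        [ndbd default]
        ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},rep={count=1},io={count=1}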

    Read the article

  • .NET: IOException for permissions when creating new file?

    - by Rosarch
    I am trying to create a new file and write XML to it:

        FileStream output = File.Create(Path.Combine(PATH_TO_DATA_DIR, fileName));

    The argument evaluates to C:\path\to\Data\test.xml. The exception is: "The process cannot access the file 'C:\path\to\Data\test.xml' because it is being used by another process." What am I doing wrong here?

    UPDATE: This code throws the same exception:

        StreamWriter writer = new StreamWriter(Path.Combine(PATH_TO_DATA_DIR, fileName));

    UPDATE 2: The file I am trying to create does not exist in the file system. So how can it be in use?
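    One thing worth ruling out (a hedged sketch, not a confirmed diagnosis for this question): a stream opened earlier on the same path and never disposed keeps the file locked by this very process. Wrapping every open in a using block guarantees the handle is released; PATH_TO_DATA_DIR and fileName are the names from the question:

        using (FileStream output = File.Create(Path.Combine(PATH_TO_DATA_DIR, fileName)))
        {
            // write the XML while the stream is open; the handle is released
            // deterministically when the block exits, even on an exception
        }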

    Read the article

  • How to pass a very long string/file into a REST webservice (JAX-RS, Jersey)

    - by Sashikiran Challa
    Hello all, I am trying to write a web service that takes in an XML string, parses it using DOM, and extracts the particular things I want. My XML string happens to be very long, so I do not want to pass it as a @QueryParam or @PathParam. Say I write that XML string into a file: how do I go about writing a RESTful service that takes in this file, extracts whatever I want, and returns the results? I am actually trying to extract some number of strings, so my output should probably be an ArrayList holding all these strings. Could somebody please shed some light on how I should go about doing this? Thanks in advance.
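    A sketch under assumptions (the class name, path and response type are invented, and it assumes Jersey with a JSON provider configured): the XML travels in the POST entity body, so none of the @QueryParam/@PathParam length concerns apply.

        import java.util.ArrayList;
        import java.util.List;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        @Path("/extract")
        public class ExtractResource {

            @POST
            @Consumes(MediaType.APPLICATION_XML)
            @Produces(MediaType.APPLICATION_JSON)
            public List<String> extract(String xmlBody) {
                // JAX-RS hands the raw entity body to a String parameter;
                // parse xmlBody with DOM here and collect the wanted values.
                List<String> results = new ArrayList<String>();
                // ... DOM parsing elided ...
                return results;
            }
        }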

    Read the article

  • custom converter in XStream

    - by Raja Chandra Rangineni
    Hi All, I am using XStream to serialize my objects to XML format. node1, node2 and node3 are attributes of the POJO DetailDollars, so the formatted XML that I get is as below:

        <DetailDollars>
            <node1>100</node1>
            <node2>25</node2>
            <node3>10</node3>
        </DetailDollars>

    I have a requirement where I have to calculate a percentage, for example 100/25, and add the new node to the existing ones. So the final output should be:

        <DetailDollars>
            <node1>100</node1>
            <node2>25</node2>
            <node3>10</node3>
            <node4>4</node4>
        </DetailDollars>

    I wrote a custom converter and registered it with my XStream object:

        public void marshal(..) {
            writer.startNode("node4");
            writer.setValue(getNode1() / getNode2());
            writer.endNode();
        }

    But the XML stream I get has only the new node. I am not sure which XStream API would get me the desired format. Could you please help me with this? Thanks, Raja Chandra Rangineni.

    Read the article

  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive, and it does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
    - determining the effects of database changes throughout the system
    - seeing hot spots in the database schema
    - finding dead stored procedures, tables, etc.
    - understanding the existing code base
    Does anyone know of any such tools?

    Read the article

  • C# .Net file in use issue

    - by Dan
    I'm having an issue opening files that have recently been closed by the .NET Framework. Basically, what happens is the following:
    - Read in an XML file using DataSet.ReadXml()
    - Make some changes to the data
    - Write out the XML file using DataSet.WriteXml()
    - Copy the XML file to a new location using File.Copy
    - FTP the file using a custom control
    This sequence can intermittently fail, either after the WriteXml or the File.Copy, with a file-in-use exception. I'm guessing it could be the Windows write cache not flushing right away. Can anyone confirm that this could be causing my issue? Any solutions to suggest? Thanks, Dan
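    One pattern that removes the most common cause (a sketch, not a confirmed fix; xmlPath and destPath are hypothetical variable names, and dataSet stands for the DataSet from the question):

        // Writing through an explicitly disposed stream guarantees this process
        // has flushed and released its handle before File.Copy runs.
        using (var fs = new FileStream(xmlPath, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            dataSet.WriteXml(fs);
        } // handle closed here
        File.Copy(xmlPath, destPath, true); // true = overwrite existing target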

    Read the article

  • JSF2 and RichFaces 3.3.3 application on Tomcat 6.0 crashes with a StackOverflowError

    - by Vivek Madapura V
    Hi, I am using JSF 2 and RichFaces 3.3.3 for an application hosted on Tomcat 6.0.20. The application crashes as soon as a request is made via the browser (Mozilla and IE). My web.xml looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="http://java.sun.com/xml/ns/javaee"
            xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
            id="WebApp_ID" version="2.5">
            <display-name>TestJSF</display-name>
            <welcome-file-list>
                <welcome-file>pages/login.xhtml</welcome-file>
            </welcome-file-list>
            <servlet>
                <servlet-name>Faces Servlet</servlet-name>
                <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
                <load-on-startup>1</load-on-startup>
            </servlet>
            <servlet-mapping>
                <servlet-name>Faces Servlet</servlet-name>
                <url-pattern>/faces/*</url-pattern>
            </servlet-mapping>
            <servlet-mapping>
                <servlet-name>Faces Servlet</servlet-name>
                <url-pattern>*.xhtml</url-pattern>
            </servlet-mapping>
            <context-param>
                <description>State saving method: 'client' or 'server' (=default). See JSF Specification 2.5.2</description>
                <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
                <param-value>server</param-value>
            </context-param>
            <context-param>
                <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name>
                <param-value>true</param-value>
            </context-param>
            <context-param>
                <param-name>org.richfaces.SKIN</param-name>
                <param-value>blueSky</param-value>
            </context-param>
            <context-param>
                <param-name>org.richfaces.CONTROL_SKINNING</param-name>
                <param-value>enable</param-value>
            </context-param>
            <context-param>
                <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
                <param-value>.xhtml</param-value>
            </context-param>
            <context-param>
                <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
                <param-value>true</param-value>
            </context-param>
            <listener>
                <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
            </listener>
            <filter>
                <display-name>RichFaces Filter</display-name>
                <filter-name>richfaces</filter-name>
                <filter-class>org.ajax4jsf.Filter</filter-class>
            </filter>
            <filter-mapping>
                <filter-name>richfaces</filter-name>
                <servlet-name>Faces Servlet</servlet-name>
                <dispatcher>REQUEST</dispatcher>
                <dispatcher>FORWARD</dispatcher>
                <dispatcher>INCLUDE</dispatcher>
            </filter-mapping>
        </web-app>

    The exception is:

        javax.servlet.ServletException: Servlet execution threw an exception
            org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530)
            com.sun.faces.context.ExternalContextImpl.dispatch(ExternalContextImpl.java:542)
            com.sun.faces.application.view.JspViewHandlingStrategy.executePageToBuildView(JspViewHandlingStrategy.java:359)
            com.sun.faces.application.view.JspViewHandlingStrategy.buildView(JspViewHandlingStrategy.java:150)
            com.sun.faces.application.view.JspViewHandlingStrategy.renderView(JspViewHandlingStrategy.java:190)
            com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:127)
            org.ajax4jsf.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:100)
            org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:176)
            com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:117)
            com.sun.faces.lifecycle.Phase.doPhase(Phase.java:97)
            com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:135)
            javax.faces.webapp.FacesServlet.service(FacesServlet.java:309)

    The stack trace is recursively logged with this until the StackOverflowError occurs. If I remove all the configuration related to RichFaces, the application works like a charm. Any advice is much appreciated.

    Read the article

  • JAXB, downcast an unmarshalled class

    - by elgcom
    I define two XML types (a base and a derived type):

        <xs:complexType name="ParentType">
            ...
        </xs:complexType>

        <xs:complexType name="ChildType">
            <xs:complexContent>
                <xs:extension base="ParentType">
                    ...
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>

    Now I have an XML instance of ChildType. The question: if I unmarshal the XML into a ParentType Java object, can I downcast the Java object to its subclass ChildType? Is that possible? Thanks.
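    If the instance document identifies itself as the derived type (for example with xsi:type="ChildType"), JAXB generally instantiates the subclass, so a runtime check-and-cast works. A minimal sketch, assuming generated classes ParentType and ChildType and an invented file name:

        // Unmarshal, then inspect the runtime type before casting.
        JAXBContext ctx = JAXBContext.newInstance(ParentType.class, ChildType.class);
        Object result = ctx.createUnmarshaller().unmarshal(new File("instance.xml"));
        Object value = (result instanceof JAXBElement)
                ? ((JAXBElement<?>) result).getValue()  // unwrap if wrapped
                : result;
        if (value instanceof ChildType) {
            ChildType child = (ChildType) value;  // succeeds only if JAXB
            // ...                                // actually built a ChildType
        }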

    Read the article

  • COLUMNS_UPDATED() for audit triggers

    - by Piotr Rodak
    In SQL Server 2005, triggers are pretty much the only option if you want to audit changes to a table. There are many ways you can decide to store the change information. You may decide to store every changed row as a whole, either in a history table or as XML in an audit table. The former case requires having a history table with exactly the same schema as the audited table; the latter makes data retrieval and management of the table a bit tricky. Both approaches also suffer from the tendency to consume...(read more)
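    As a companion to the post, here is a minimal sketch of the XML-in-audit-table approach on SQL Server 2005; the table and column names are hypothetical, not from the post:

        -- Audit trigger sketch: stores the changed rows as XML plus the
        -- COLUMNS_UPDATED() bitmask identifying which columns changed.
        CREATE TRIGGER trg_Customer_AuditUpdate ON dbo.Customer
        AFTER UPDATE
        AS
        BEGIN
            DECLARE @rows xml;
            SET @rows = (SELECT * FROM inserted FOR XML RAW('row'), ROOT('rows'));
            INSERT INTO dbo.AuditLog (AuditedAt, UpdatedColumnsMask, RowData)
            VALUES (GETDATE(), COLUMNS_UPDATED(), @rows);
        END;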

    Read the article

  • Oracle Service Bus + SOA in the same server

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a combined SOA+OSB domain that runs in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both the OEPE and coherence components (e.g. oepe111150_wls1033_win32.exe); the WebLogic installer bundled with the coherence and OEPE components can be seen in the screenshot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using coherence, and because of this we will have to install coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB on the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We should also see the coherence components, which are mandatory for OSB, and the optional OEPE also installed. Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schema for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we will have to create the SOA+OSB domain using the config wizard, located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. We can also bundle Enterprise Manager in the same installation or in a different server; in this case we will use the Enterprise Manager in the same domain, so we selected the Enterprise Manager component as well. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. This topology is for users who want to use OSB in a single-server configuration. Currently SOA doesn't support the single-server topology, so it cannot be used with a SOA domain and can only be used for standalone OSB installations. We can continue with the domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two servers, one managed server for OSB and one for SOA. Since we would like to have both in one managed server (one JVM), we will have to do one important step here: delete either of the servers and rename the remaining server with the deleted server's name (e.g. delete osb_server1 and rename soa_server1 to osb_server1).

    Read the article

  • Rails Action and Fragment Caching during Development?

    - by viatropos
    I'm just starting to get into Rails caching and am wondering how to test whether or not caching is working in my development environment. I have set these two config variables for both the development (temporarily) and production environments:

        config.action_controller.perform_caching = true
        config.action_controller.page_cache_directory = File.join(RAILS_ROOT, 'public', 'cache')

    And my controller basically looks like this (using the resource_controller gem):

        class EventsController < Spree::BaseController
          resource_controller
          caches_page :index

          index.response do |format|
            format.html
            format.xml { render :xml => @collection.to_xml }
          end

          # ...
        end

    If I do that, I get these files:

        public/cache/events.html
        public/cache/events.xml

    But when I change caches_page to caches_action, I don't see any generated files and am not sure how to tell if the response has been cached. What do I need to know to know that this is working? I've read the docs, the Rails caching guides, the railscast, and a few other docs, but I'm still wondering where everything is stored. Thanks!
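    A hedged pointer, not a confirmed answer: unlike page caching, action and fragment caches go to the configured cache store rather than to page_cache_directory, so one way to check is to point the store at a directory you can inspect. The paths and the cache key below are assumptions for a Rails 2.x app:

        # environment config: use an on-disk store you can look at
        config.cache_store = :file_store, File.join(RAILS_ROOT, 'tmp', 'cache')

        # after hitting the action, probe the store from script/console;
        # action-cache keys are typically host/path based
        Rails.cache.exist?('views/localhost:3000/events')  # key name is a guess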

    Read the article

  • RSS parsing last build Date. Fastest way to do so please.

    - by Paul
    Dim myRequest As System.Net.WebRequest = System.Net.WebRequest.Create(url)
    Dim myResponse As System.Net.WebResponse = myRequest.GetResponse()
    Dim rssStream As System.IO.Stream = myResponse.GetResponseStream()
    Dim rssDoc As New System.Xml.XmlDocument()
    Try
        rssDoc.Load(rssStream)
    Catch nosupport As NotSupportedException
        Throw nosupport
    End Try
    Dim rssItems As System.Xml.XmlNodeList = rssDoc.SelectNodes("rss/channel")
    'For i As Integer = 0 To rssItems.Count - 1
    Dim rssDetail As System.Xml.XmlNode
    rssDetail = rssItems.Item(0).SelectSingleNode("lastBuildDate")

    Folks, this is what I'm using to parse an RSS feed for the last updated time. Is there a quicker way? Speed seems to be a bit slow, as it pulls down the entire feed before parsing.
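    One possible speed-up (a sketch, not tested against the poster's feed): stream the document with XmlReader and stop as soon as lastBuildDate appears near the top of the channel, instead of loading the whole feed into an XmlDocument:

        ' Stops reading at the first <lastBuildDate> element.
        Using reader As System.Xml.XmlReader = System.Xml.XmlReader.Create(url)
            If reader.ReadToFollowing("lastBuildDate") Then
                Dim lastBuild As String = reader.ReadElementContentAsString()
            End If
        End Using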

    Read the article

  • Poll Results: Foreign Key Constraints

    - by Darren Gosbell
    A few weeks ago I did the following post asking people if they used foreign key constraints in their star schemas. The poll is still open if you are interested in adding to it, but here is what the chart looks like as of today. (At the bottom of the poll itself there is a link to the live results; unfortunately I cannot link the live results here, as the blogging platform blocks the required JavaScript.) Interestingly, the results are fairly even. Of the 78 respondents, fractionally over half at least aim to start with referential integrity in their star schemas. I did not want to influence the results by sharing my opinion, but my personal preference is to always aim to have foreign key constraints. At the same time, I am pragmatic about it: I do have projects where, for various reasons, some constraints are not defined, and I also have other designs that I have inherited where it would just be too much work to go back and add foreign key constraints. If you are going to implement foreign keys in your star schema, they really need to be there at the start. In fact, this poll was the result of a feature request for BIDSHelper asking for a feature to check for null/missing foreign keys, and I am entirely convinced that BIDS is the wrong place for this sort of functionality. BIDS is a design tool; your data needs to be constantly checked for consistency. It's not that I think it's impossible to get a design working without foreign key constraints, but I like the idea of failing as soon as possible if there is an error, and enforcing foreign key constraints lets me "fail early" if there are consistency issues with my data. By far the biggest concern with foreign keys is performance, and I suppose I'm curious as to how often people actually measure and quantify this. I worked on a project a number of years ago that had very large data volumes, and we did find that foreign key constraints had a measurable impact. What we did was to disable the constraints before loading the data, then enable and check them afterwards. This saved us time (although not as much as having no constraints at all), but still let us know early in the process if there were any consistency issues. For the people that do not have consistent data: if you have ETL processes that you control that are building your star schema, which you also control, then to be blunt you only have yourself to blame. It is the job of the ETL process to make the data consistent. There are techniques for handling situations like missing data as well as early- and late-arriving data. Ralph Kimball's book, The Data Warehouse Toolkit, goes through some design patterns for handling data consistency. Having foreign key relationships can also help the relational engine to optimize queries, as noted in this recent blog post by Boyan Penev.
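    The disable-load-recheck pattern mentioned above looks roughly like this in T-SQL (a sketch with hypothetical table and constraint names, not from the post):

        -- Disable the constraint for the bulk load...
        ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimDate;
        -- ...load the fact data here...
        -- ...then re-enable it WITH CHECK so the loaded rows are validated too.
        ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT FK_FactSales_DimDate;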

    Read the article

  • Account preferences crashes on ListPreference

    - by Sionide21
    I have created an account type using the AccountAuthenticator stuff as done in the SampleSyncAdapter tutorial. I am now trying to get account preferences working. I have added the line android:accountPreferences="@xml/account_preferences" to my account-authenticator and account_preferences.xml looks like so: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> <PreferenceCategory android:title="@string/alum_settings_title"/> <CheckBoxPreference android:key="sync_alum" android:title="@string/sync_alum" android:summaryOn="@string/sync_alum_check" android:summaryOff="@string/sync_alum_nocheck"/> <ListPreference android:key="sync_alum_since" android:title="@string/alum_years" android:entries="@array/years" android:entryValues="@array/years" android:dependency="sync_alum"/> </PreferenceScreen> The checkbox preference works exactly like it should but the ListPreference crashes the entire system with the following message: 05-14 22:32:16.794: ERROR/AndroidRuntime(63): android.view.WindowManager$BadTokenException: Unable to add window -- token null is not for an application I get the same error with EditTextPreference and with the custom subclass of DialogPreference I created.

    Read the article

  • custom converter in XStream

    - by Raja Chandra Rangineni
    Hi All, I am using XStream to serialize my objects to XML format. node1, node2 and node3 are attributes of the POJO DetailDollars, so the formatted XML that I get is as below:

        <DetailDollars>
            <node1>100</node1>
            <node2>25</node2>
            <node3>10</node3>
        </DetailDollars>

    I have a requirement where I have to calculate a percentage, for example 100/25, and add the new node to the existing ones. So the final output should be:

        <DetailDollars>
            <node1>100</node1>
            <node2>25</node2>
            <node3>10</node3>
            <node4>4</node4>
        </DetailDollars>

    I wrote a custom converter and registered it with my XStream object:

        public void marshal(..) {
            writer.startNode("node4");
            writer.setValue(getNode1() / getNode2());
            writer.endNode();
        }

    But the XML stream I get has only the new node:

        <DetailDollars>
            <node4>4</node4>
        </DetailDollars>

    I am not sure which XStream API would get me the desired format. Could you please help me with this? Thanks, Raja Chandra Rangineni.
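    A registered converter replaces XStream's default serialization for the type entirely, so it has to write the original nodes as well as the computed one. A sketch under assumptions (the getter names and int types are guesses from the question):

        // Emits node1..node3 plus the computed node4; XStream itself writes
        // the surrounding <DetailDollars> element for the registered type.
        public void marshal(Object source, HierarchicalStreamWriter writer,
                            MarshallingContext context) {
            DetailDollars d = (DetailDollars) source;
            writeNode(writer, "node1", d.getNode1());
            writeNode(writer, "node2", d.getNode2());
            writeNode(writer, "node3", d.getNode3());
            writeNode(writer, "node4", d.getNode1() / d.getNode2());
        }

        private void writeNode(HierarchicalStreamWriter writer, String name, int value) {
            writer.startNode(name);
            writer.setValue(String.valueOf(value)); // setValue expects a String
            writer.endNode();
        }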

    Read the article

  • SOA+OSB in same JVM

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in same JVM as SOA. This tutorial covers on how to do create domain in of SOA+OSB combined to run in single JVM . For this tutorial we will use a flavor  WebLogic installer bundled with both OEPE and coherence components (eg oepe111150_wls1033_win32.exe). WebLogic installer bundled with coherence and OEPE components can be seen in the screen shot.Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using coherence. Because of this we will have to install coherence before  installing OSB.  To get soa and osb running in the same domain, we have to install the SOA and OSB on the above ORACLE_HOME. After installation we should see both the SOA and OSB homes has highlighted in red.We could also see the coherence components which is mandatory for OSB and optional OEPE also installed.Now we will execute RCU(ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schema for SOA and OSB. New RCU contains OSB tables (WLI_QS_REPORT_DATA , WLI_QS_REPORT_ATTRIBUTE) gets loaded as part of SOAINFRA schema After this step we will have to create soa+osb domain using config wizard. It is located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform) .While creating a domain we will select options for SOA Suite  and Oracle Service Bus Extension-All Domain Topologies.There is another option for OSB  Oracle Service Bus Extension-Single server Domain Topology. This topology is for users who want to use OSB in single server configuration. Currently SOA doesn't support single server topology. So this topology cannot be used with SOA domain but can only be used for stand alone OSB installations.We can continue with domain configuration till we reach the below screen. Following steps are mandatory if we want to have the SOA and OSB run in same JVMwe should select Managed Server, Clusters and Machines as shown below After this selection you should see a screen with two servers One managed server for OSB and one managed for SOA. Since we would like to have both the servers in one managed server (one JVM) we will have to do one important step here. We have to delete either of the servers and rename the other server with deleted server name.eg delete osb_server1 and rename the soa_server1 to osb_server1 or we can also delete soa_server1 and rename the osb_server1 to soa_server1After this steps proceed as as-usual . If we observe created domain we see only one managed server which contains components for both SOA and OSB ($DOMAIN_HOME/startManagedWebLogic_readme.txt). 

    Read the article

  • Getting parameters from friendly URLs in JSR 286 portlets

    - by user265950
    Hi, I am using IBM Portal Server. There is a link coming from an external site; the URL is as below:

        http://localhost.us.deloitte.com:10040/wps/myportal/home/gm_assignee_label/gm_eoa_page?invoker=esb?agsnid=32984?asgnmtid=50085

    home, gm_assignee_label and gm_eoa_page are friendly URLs given to 3 different pages; the things after the ? are the key/value parameters. I want to retrieve these parameters when I click on the link above and my page gets loaded. I tried the approach given by IBM, but it didn't help me. My portlet.xml code is as below:

        <?xml version="1.0" encoding="UTF-8"?>
        <portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" version="2.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
            id="com.ibm.faces.portlet.FacesPortlet.8b353a4492">
            <portlet>
                <portlet-name>EndOfAssignmentPortlet</portlet-name>
                <display-name xml:lang="en">EndOfAssignmentPortlet</display-name>
                <display-name>EndOfAssignmentPortlet</display-name>
                <portlet-class>com.ibm.endofassignmentportlet.EndOfAssignmentPortlet</portlet-class>
                <init-param>
                    <name>com.ibm.faces.portlet.page.view</name>
                    <value>/view/endofassignment/EOASearchAssignment.jsp</value>
                </init-param>
                <init-param>
                    <name>wps.markup</name>
                    <value>html</value>
                </init-param>
                <init-param>
                    <name>com.sun.faces.portlet.SAVE_REQUEST_SCOPE</name>
                    <value>true</value>
                </init-param>
                <expiration-cache>0</expiration-cache>
                <supports>
                    <mime-type>text/html</mime-type>
                    <portlet-mode>view</portlet-mode>
                    <portlet-mode>EDIT</portlet-mode>
                    <portlet-mode>HELP</portlet-mode>
                </supports>
                <supported-locale>en</supported-locale>
                <resource-bundle>com.ibm.endofassignmentportlet.nl.EndOfAssignmentPortletResource</resource-bundle>
                <portlet-info>
                    <title>EndOfAssignmentPortlet</title>
                    <short-title>EndOfAssignmentPortlet</short-title>
                    <keywords>EndOfAssignmentPortlet</keywords>
                </portlet-info>
                <supported-public-render-parameter>AssigneeID</supported-public-render-parameter>
                <supported-public-render-parameter>AssignmentID</supported-public-render-parameter>
                <supported-public-render-parameter>InvokerID</supported-public-render-parameter>
            </portlet>
            <default-namespace>http://EndOfAssignmentPortlet/</default-namespace>
            <public-render-parameter>
                <identifier>AssigneeID</identifier>
                <qname xmlns:x="http://localhost.us.deloitte.com:10040/wps/myportal">x:agsnid</qname>
            </public-render-parameter>
            <public-render-parameter>
                <identifier>AssignmentID</identifier>
                <qname xmlns:x="http://localhost.us.deloitte.com:10040/wps/myportal">x:asgnmtid</qname>
            </public-render-parameter>
            <public-render-parameter>
                <identifier>InvokerID</identifier>
                <qname xmlns:x="http://localhost.us.deloitte.com:10040/wps/myportal">x:invoker</qname>
            </public-render-parameter>
        </portlet-app>

    I am trying to get the values in my doView method of the portlet as below:

        String esbAssigneeID = request.getParameter("agsnid");

    But I always get null. Please help. TIA, Tejas
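    One thing worth checking (a hedged suggestion, not a verified fix for this portal setup): JSR 286 exposes public render parameters to the portlet under the <identifier> declared in portlet.xml, not under the qname's local part, so the lookup in doView would be:

        // Look the parameter up by its declared identifier...
        String esbAssigneeID = request.getParameter("AssigneeID");
        // ...or dump everything that actually arrived:
        java.util.Map<String, String[]> publicParams = request.getPublicParameterMap();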

    Read the article

  • AJAX vs AHAH: Is there a performance advantage?

    - by LanguaFlash
    My concern is performance. Is there a reason to send the client XML instead of valid HTML? Like most things, I am sure it is application-dependent. My specific situation is one where substantial content pulled from a database is being inserted into the web page. What are the advantages of either approach? Is the size of the content even a concern? Or, in the case of using XML, will the time for the JavaScript to process the XML into HTML counterbalance the extra time that would have been required to send HTML to start with? Thanks, Jeff

    Read the article

  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
    Usage scenario: copy data and indexes from one partition to another partition in a partitioned table.

    Solution:
    - Create a partition definition.
    - Copy data from one partition to another partition by 'Insert as select (IAS)'.
    - Create a nonpartitioned table by 'Create table as select (CTAS)'.
    - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments.
    - Rebuild any unusable indexes.

    Exchange partition conversions:
    - Mutual conversion between a partition (or subpartition) and a nonpartitioned table.
    - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table.
    - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table.

    Exchange partition usage scenarios:
    - High-speed loading of new, incremental data into an existing partitioned table in a DW environment.
    - Exchanging old data partitions out of a partitioned table: the data is purged from the partitioned table without actually being deleted, and can be archived separately.

    Exchange partition syntax:

        ALTER TABLE schema.table
          EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
          WITH TABLE schema.table
          [INCLUDING|EXCLUDING] INDEXES
          [WITH|WITHOUT] VALIDATION
          UPDATE [INDEXES|GLOBAL INDEXES]

    INCLUDING | EXCLUDING INDEXES: specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.

    WITH | WITHOUT VALIDATION: specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.

    UPDATE INDEXES | GLOBAL INDEXES: unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)

    Notes on exchanging partitions and subpartitions:
    - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.
    - When exchanging partitioned index-organized tables: the source and target table or partition must have their primary key set on the same columns, in the same order; if key compression is enabled, it must be enabled for both the source and the target, with the same prefix length; both the source and target must be index-organized; both must have overflow segments, or neither can have them, and both must have mapping tables, or neither can have one; and both must have identical storage attributes for any LOB columns.
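    Putting the steps together, a sketch of the CTAS-plus-exchange flow on hypothetical objects (the table, partition and index names are invented for illustration):

        -- 1. Stage the source partition's rows in a nonpartitioned table (CTAS).
        CREATE TABLE sales_stage NOLOGGING PARALLEL AS
          SELECT * FROM sales PARTITION (p2012q1);

        -- 2. Swap the staged segment into the target partition.
        ALTER TABLE sales EXCHANGE PARTITION p2012q2 WITH TABLE sales_stage
          EXCLUDING INDEXES WITHOUT VALIDATION UPDATE GLOBAL INDEXES;

        -- 3. Rebuild the local index partitions marked UNUSABLE by the exchange.
        ALTER INDEX sales_local_idx REBUILD PARTITION p2012q2;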

    Read the article

  • The role of the Infrastructure DBA

    - by GavinPayneUK
    Do you have someone performing an Infrastructure DBA role within your organisation? Do you realise why, today, you might need one? When I first started working with SQL Server there were three distinct roles in the SQL Server virtual team: developer, DBA and sysadmin. In my simple terms, the developer looked after the "code": the schema, stored procedures, and any ETL to get data in, out or updated within the database. They could talk in business entity terms about Customer numbers, Product codes...(read more)

    Read the article

  • Create Zip file from stream and download it

    - by Navid Farhadi
    I have a DataTable that I want to convert to XML and then zip using DotNetZip, so the user can finally download it via an ASP.NET web page. My code is below:

        dt.TableName = "Declaration";
        MemoryStream stream = new MemoryStream();
        dt.WriteXml(stream);
        ZipFile zipFile = new ZipFile();
        zipFile.AddEntry("Report.xml", "", stream);
        Response.ClearContent();
        Response.ClearHeaders();
        Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
        zipFile.Save(Response.OutputStream);
        //Response.Write(zipstream);
        zipFile.Dispose();

    The XML file inside the zip file is empty.
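    A likely culprit worth testing (hedged, not verified against this exact DotNetZip version): WriteXml leaves the MemoryStream positioned at its end, so the entry is read from EOF and comes out empty. Rewinding the stream before adding the entry should help:

        dt.WriteXml(stream);
        stream.Position = 0;                         // rewind: WriteXml left us at EOF
        zipFile.AddEntry("Report.xml", "", stream);  // rest of the code as above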

    Read the article

  • Entity Framework and the XmlIgnoreAttribute

    - by Mikey Cee
    Say you have a one-to-one relationship in your entity model. The code generator will decorate it with the following attributes:

        [global::System.Xml.Serialization.XmlIgnoreAttribute()]
        [global::System.Xml.Serialization.SoapIgnoreAttribute()]
        public RelatedObject Relationship
        {
            get {...}
            set {...}
        }

    I want to serialize my parent object, together with all its properties for which data has been loaded, through an XML web service. Obviously, these related properties do not get serialized because of these attributes. So for my purposes I just want to remove these "don't serialize me" attributes. I can do a find-and-replace in the designer code, but any modifications I make in the designer will put these attributes back in. How do I permanently get rid of these attributes? VS 2008 / EF 3.5.

    Read the article

  • Would you expect this error?

    - by GrumpyOldDBA
    Now I know why, but what I'm thinking is: if I create an error, should I get valid data returned? To explain: I was browsing through the DMVs for queries which might benefit from tuning, and I identified a query with two clustered index scans (table scans). I don't know all the schema off by heart, and I was looking for a select by a LoginID column. I assumed this would be numeric and promptly entered an integer value to examine the query plan. Yeah, I should have looked at the table definition...(read more)

    Read the article

  • What strategy to use when starting in a new project with no documentation?

    - by Amir Rezaei
    What is the best way to proceed when there is no documentation? For example, how do you learn the business rules? I have taken the following steps:
    - Since we are using an ORM tool, I have printed a copy of the database schema, where I can see the relations between objects.
    - I have made a list of short names/table names that I will get explained.
    The project is a client/server enterprise application using the MVVM pattern.

    Read the article
