Search Results

Search found 3888 results on 156 pages for 'star schema'.


  • What does a well formed XML or Schema look like for InfoPath form creation?

    - by Keith Sirmons
    Howdy, Are there any resources out there that define what well-formed XML or an XML Schema should look like for InfoPath? When designing a new form, there is an option to base the new form on an existing XML document or XML Schema as the data source. I am looking for any guidelines or rules that will help me make sure the structure of the XML file I use to create the form will work as well as it possibly can. I am creating the XML structure for another project, but we want to make sure the XML that is created would be InfoPath-friendly for possible future applications. Thank you, Keith
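    No hard rules seem to be published, but a minimal sketch of the shape InfoPath tends to handle cleanly (one named root element, explicit simple types, qualified elements; all names here are invented):

        <!-- Hypothetical minimal schema: single root, no recursion, explicit types. -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="urn:example:myForm"
                   xmlns="urn:example:myForm"
                   elementFormDefault="qualified">
          <xs:element name="myFields">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="customerName" type="xs:string"/>
                <xs:element name="orderDate" type="xs:date"/>
                <xs:element name="quantity" type="xs:integer" minOccurs="0"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>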

    Read the article

  • In symfony/doctrine's schema.yml, where should I put onDelete: CASCADE for a many-to-many relationship?

    - by nselikoff
    I have a many-to-many relationship defined in my Symfony (using Doctrine) project between Orders and Upgrades (an Order can be associated with zero or more Upgrades, and an Upgrade can apply to zero or more Orders):

        # schema.yml
        Order:
          columns:
            order_id: {...}
          relations:
            Upgrades:
              class: Upgrade
              local: order_id
              foreign: upgrade_id
              refClass: OrderUpgrade
        Upgrade:
          columns:
            upgrade_id: {...}
          relations:
            Orders:
              class: Order
              local: upgrade_id
              foreign: order_id
              refClass: OrderUpgrade
        OrderUpgrade:
          columns:
            order_id: {...}
            upgrade_id: {...}

    I want to set up delete cascade behavior so that if I delete an Order or an Upgrade, all of the related OrderUpgrades are deleted. Where do I put onDelete: CASCADE? Usually I would put it at the end of the relations section, but that would seem to imply in this case that deleting Orders would cascade to delete Upgrades. Is Symfony + Doctrine smart enough to know what I'm wanting if I put onDelete: CASCADE in the above relations sections of schema.yml?
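    For what it's worth, the usual Doctrine 1 pattern is to declare the cascade on the join table's own relations, i.e. the side that physically owns each foreign key, rather than on the m:n relations. A sketch against the schema above (behavior not verified here):

        OrderUpgrade:
          columns:
            order_id: {...}
            upgrade_id: {...}
          relations:
            Order:
              local: order_id
              foreign: order_id
              onDelete: CASCADE   # deleting an Order removes its OrderUpgrade rows
            Upgrade:
              local: upgrade_id
              foreign: upgrade_id
              onDelete: CASCADE   # deleting an Upgrade removes its OrderUpgrade rows

    Declared this way, deleting an Order or an Upgrade should only cascade to OrderUpgrade rows, never across to the other side.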

    Read the article

  • Forum board example schema in YAML format - modify for Nested set?

    - by takeshin
    I have created a forum board app, based on a YAML schema found in the 'real world examples' of the Doctrine Manual, which looks similar to this:

        ---
        Forum_Category:
          columns:
            root_category_id: integer(10)
            parent_category_id: integer(10)
            name: string(50)
            description: string(99999)
          relations:
            Subcategory:
              class: Forum_Category
              local: parent_category_id
              foreign: id
            Rootcategory:
              class: Forum_Category
              local: root_category_id
              foreign: id
        Forum_Board:
          columns:
            category_id: integer(10)
            name: string(100)
            description: string(5000)
          relations:
            Category:
              class: Forum_Category
              local: category_id
              foreign: id
            Threads:
              class: Forum_Thread
              local: id
              foreign: board_id
        Forum_Entry:
          columns:
            author: string(50)
            topic: string(100)
            message: string(99999)
            parent_entry_id: integer(10)
            thread_id: integer(10)
            date: integer(10)
          relations:
            Parent:
              class: Forum_Entry
              local: parent_entry_id
              foreign: id
            Thread:
              class: Forum_Thread
              local: thread_id
              foreign: id
        Forum_Thread:
          columns:
            board_id: integer(10)
            updated: integer(10)
            closed: integer(1)
          relations:
            Board:
              class: Forum_Board
              local: board_id
              foreign: id
            Entries:
              class: Forum_Entry
              local: id
              foreign: thread_id

    How do I modify this schema to use NestedSet (a tree structure of threads and entries)?
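    One possible direction, sketched under the assumption that Doctrine 1's NestedSet behavior fits: attach the template via actAs, which adds and manages the lft/rgt/level columns for you, making the manual parent_entry_id self-relation redundant (root_id is a chosen name, the rest are defaults):

        Forum_Entry:
          actAs:
            NestedSet:
              hasManyRoots: true        # one tree per thread
              rootColumnName: root_id
          columns:
            author: string(50)
            topic: string(100)
            message: string(99999)
            thread_id: integer(10)
            date: integer(10)
          relations:
            Thread:
              class: Forum_Thread
              local: thread_id
              foreign: id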

    Read the article

  • How to version SQL Server schema using VS 2005?

    - by Mike
    I am new to C# programming and am coming to it most recently from working with Ruby on Rails. In RoR, I am used to being able to write schema migrations for the database. I would like to be able to do something similar for my C#/SQLServer projects. Does such a tool exist for the VS 2005 toolset? Would it be wise to use RoR migrations with SQL Server directly outside of VS 2005? In other words, I would handle all schema versioning using ActiveRecord:Migration from Rails but nothing else. If I do handle migrations outside of C# and VS 2005 with another tool, is RoR ActiveRecord:Migration the best thing to use or is there something which is a better fit?
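    Whatever tool you settle on, the underlying mechanism is usually just a version table plus guarded scripts. Purely as an illustration, with invented table and column names:

        -- Hypothetical hand-rolled migration bookkeeping for SQL Server 2005.
        IF NOT EXISTS (SELECT * FROM sys.objects WHERE name = 'SchemaVersion' AND type = 'U')
            CREATE TABLE SchemaVersion (Version INT NOT NULL,
                                        AppliedOn DATETIME NOT NULL DEFAULT GETDATE());

        -- Migration 2: only runs if the database is still on version 1.
        IF (SELECT MAX(Version) FROM SchemaVersion) = 1
        BEGIN
            ALTER TABLE Customers ADD MiddleName NVARCHAR(50) NULL;
            INSERT INTO SchemaVersion (Version) VALUES (2);
        END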

    Read the article

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version we: 1) STOP SLAVE, and 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn on our service, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are made on the slave.
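    Not a definitive runbook, but the usual shape of a manual promotion looks roughly like this; host and user names are placeholders, and the log coordinates come from SHOW MASTER STATUS on the newly promoted server:

        -- On the slave, once you decide to promote it:
        STOP SLAVE;
        RESET SLAVE;    -- forget the old master's coordinates
        RESET MASTER;   -- start a fresh binlog history for the new topology
        -- repoint the application at this server, then later rebuild the old
        -- master from a dump of the new master and attach it as a slave:
        CHANGE MASTER TO
            MASTER_HOST = 'new-master.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = '...',
            MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS
            MASTER_LOG_POS = 4;
        START SLAVE;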

    Read the article

  • Is it possible to hide label colors in list view in Mac OS 10.X Finder? I use labels as star ratings

    - by Andrew Swift
    I have modified the first five label names in OS X Lion to be ★ through ★★★★★. This way I can easily tag photos etc. in the Finder. However, I find that Apple's rendering of the colored bars is horrendously ugly. I'd like to keep the labels (to be able to sort by label) but not see the colors. I know it was possible to change the colors with Label X, but Unsanity has not issued an update for Lion. I don't need to change the colors, just find a way to hide them in the Finder.

    Read the article

  • Quartz + Spring double execution on startup

    - by Osy
    I have a Quartz 2.2.1 and Spring 3.2.2 app on Eclipse Juno. This is my bean configuration:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <!-- Spring Quartz -->
            <bean id="checkAndRouteDocumentsTask" class="net.tce.task.support.CheckAndRouteDocumentsTask" />

            <bean name="checkAndRouteDocumentsJob" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
                <property name="jobClass" value="net.tce.task.support.CheckAndRouteDocumentsJob" />
                <property name="jobDataAsMap">
                    <map>
                        <entry key="checkAndRouteDocumentsTask" value-ref="checkAndRouteDocumentsTask" />
                    </map>
                </property>
                <property name="durability" value="true" />
            </bean>

            <!-- Simple Trigger, run every 30 seconds -->
            <bean id="checkAndRouteDocumentsTaskTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerFactoryBean">
                <property name="jobDetail" ref="checkAndRouteDocumentsJob" />
                <property name="repeatInterval" value="30000" />
                <property name="startDelay" value="15000" />
            </bean>

            <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
                <property name="jobDetails">
                    <list>
                        <ref bean="checkAndRouteDocumentsJob" />
                    </list>
                </property>
                <property name="triggers">
                    <list>
                        <ref bean="checkAndRouteDocumentsTaskTrigger" />
                    </list>
                </property>
            </bean>

    My Spring MVC servlet config:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

            <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            </bean>

            <mvc:annotation-driven />
            <context:annotation-config />
            <context:component-scan base-package="net.tce" />

            <import resource="spring-quartz.xml"/>
        </beans>

    My problem is that on startup Quartz always creates two jobs at the same time. My job must execute every 30 seconds:

        INFO: Starting TASK on Mon Nov 04 15:36:46 CST 2013...
        INFO: Starting TASK on Mon Nov 04 15:36:46 CST 2013...
        INFO: Starting TASK on Mon Nov 04 15:37:16 CST 2013...
        INFO: Starting TASK on Mon Nov 04 15:37:16 CST 2013...
        INFO: Starting TASK on Mon Nov 04 15:37:46 CST 2013...
        INFO: Starting TASK on Mon Nov 04 15:37:46 CST 2013...

    Thanks for your help.
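    One unverified possibility worth ruling out: this symptom often appears when the same scheduler configuration is loaded by two application contexts, e.g. a ContextLoaderListener root context and the DispatcherServlet context both picking up spring-quartz.xml. A sketch of keeping it in the root context only (file names follow the question):

        <!-- web.xml: load the Quartz configuration once, in the root context... -->
        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/spring-quartz.xml</param-value>
        </context-param>
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>
        <!-- ...and remove <import resource="spring-quartz.xml"/> from the servlet config. -->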

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working with a small internal project that involved importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it will blindly pick a data type based on the first few rows and leave out all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read it and recognize the exact data type of each column.

    Now what is schema.ini? Taken from the documentation: the schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:

        1. The schema information file must always be named 'schema.ini'.
        2. The schema.ini file must be kept in the same directory where the CSV file exists.
        3. The schema.ini file must be created before reading the CSV file.
        4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here's an example of what the schema looks like:

        [Employee.csv]
        ColNameHeader=False
        Format=CSVDelimited
        DateTimeFormat=dd-MMM-yyyy
        Col1=EmployeeID Long
        Col2=EmployeeFirstName Text Width 100
        Col3=EmployeeLastName Text Width 50
        Col4=EmployeeEmailAddress Text Width 50

    To get started let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's go ahead and fire up Visual Studio and then create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project and set up the HTML markup: add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to the SQL Server database.
    Here are the full code blocks below:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.OleDb;
        using System.IO;
        using System.Text;

        namespace WebApplication1
        {
            public partial class CSVToSQLImporting : System.Web.UI.Page
            {
                private string GetConnectionString()
                {
                    return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
                }

                private void CreateDatabaseTable(DataTable dt, string tableName)
                {
                    string sqlQuery = string.Empty;
                    string sqlDBType = string.Empty;
                    string dataType = string.Empty;
                    int maxLength = 0;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                    for (int i = 0; i < dt.Columns.Count; i++)
                    {
                        dataType = dt.Columns[i].DataType.ToString();
                        if (dataType == "System.Int32")
                        {
                            sqlDBType = "INT";
                        }
                        else if (dataType == "System.String")
                        {
                            sqlDBType = "NVARCHAR";
                            maxLength = dt.Columns[i].MaxLength;
                        }

                        if (maxLength > 0)
                        {
                            sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                        }
                        else
                        {
                            sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                        }
                    }

                    sqlQuery = sb.ToString();
                    sqlQuery = sqlQuery.Trim().TrimEnd(',');
                    sqlQuery = sqlQuery + " )";

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
                {
                    string sqlQuery = string.Empty;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                    sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                    sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                    sqlQuery = sb.ToString();

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void BTNImport_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile)
                    {
                        FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                        if (fileInfo.Name.Contains(".csv"))
                        {
                            string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                            string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                            // Save the CSV file on the server inside 'UploadedCSVFiles'
                            FileUpload1.SaveAs(csvFilePath);

                            // Fetch the location of the CSV file
                            string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                            string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                            string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                            // Load the data from the CSV into a DataTable
                            OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                            DataTable dtCSV = new DataTable();
                            DataTable dtSchema = new DataTable();

                            adapter.FillSchema(dtCSV, SchemaType.Mapped);
                            adapter.Fill(dtCSV);

                            if (dtCSV.Rows.Count > 0)
                            {
                                CreateDatabaseTable(dtCSV, fileName);
                                Label2.Text = string.Format("The table ({0}) has been successfully created to the database.", fileName);

                                string fileFullPath = filePath + fileInfo.Name;
                                LoadDataToDatabase(fileName, fileFullPath, ",");

                                Label1.Text = string.Format("({0}) records has been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                            }
                            else
                            {
                                LBLError.Text = "File is empty.";
                            }
                        }
                        else
                        {
                            LBLError.Text = "Unable to recognize file.";
                        }
                    }
                }
            }
        }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it basically gets the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the filename. As the method name suggests, it automatically creates a table in the database based on the source DataTable and the filename of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath and the delimeter value. This method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and at the same time this is where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic trappings and validations within that event.

    Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file, name it "Employee", and add some data to it. Here's an example below:

        1,VMS,Durano,[email protected]
        2,Jennifer,Cortes,[email protected]
        3,Xhaiden,Durano,[email protected]
        4,Angel,Santos,[email protected]
        5,Kier,Binks,[email protected]
        6,Erika,Bird,[email protected]
        7,Vianne,Durano,[email protected]
        8,Lilibeth,Tree,[email protected]
        9,Bon,Bolger,[email protected]
        10,Brian,Jones,[email protected]

    Now save the newly created CSV file somewhere on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. Take a look at the sample screenshots below: After browsing the CSV file. After clicking the Import Button. Now if we look at the database that we created earlier, you'll notice that the Employee table has been created with the imported data in it. See the screenshot below. That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

  • ODI 11g – Expert Accelerator for Model Creation

    - by David Allan
    Following on from my post earlier this morning on scripting model and topology creation, tonight I thought I'd add a little UI to make those Groovy functions a little more palatable. In OWB we have experts for capturing user input; with the Groovy console we open up opportunities to build UI around the scripts in a very easy way – even I can do it ;-) After a little googling around I found some useful posts on SwingBuilder; the most useful one, which I used for the dialog below, was this one here. This dialog captures user input for the technology and context for the model and logical schema etc. to be created. You can see there are a variety of interesting controls, and it's really easy to do. The dialog captures the user's input; then, when OK is pressed, I call the functions from the earlier post to create the logical schema (plus all the other objects) and the model. The image below shows what was created: you can see the model (with a typo in the name); the model is Oracle technology and references the logical schema ORACLE_SCOTT (that I named in the dialog above); the logical schema is mapped via the GLOBAL context to the data server ORACLE_SCOTT_DEV (that I named in the dialog above); and the physical schema used was just the user name that I connected with – so if you wanted a different user, the schema name could be added to the dialog. In a nutshell: one dialog that encapsulates a simpler mechanism for creating a model. You can create your own scripts that use dialogs like this, capture input and process it. You can find the Groovy script for this here: odi_create_model.groovy. Again, I wrapped the user-capture code in a Groovy function, return the result in a variable, and then simply call the createLogicalSchema and createModel functions from the previous posting. The script supplied above has everything you will need. To execute, use Tools->Groovy->Open Script and then press the green play button on the toolbar. Have fun.

    Read the article

  • Spring rejecting bean name, no URL paths specified

    - by richever
    I am trying to register an interceptor using an annotation-driven controller configuration. As far as I can tell, I've done everything correctly, but when I try testing the interceptor nothing happens. After looking in the logs I found the following:

        2010-04-04 20:06:18,231 DEBUG [main] support.AbstractAutowireCapableBeanFactory (AbstractAutowireCapableBeanFactory.java:452) - Finished creating instance of bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0'
        2010-04-04 20:06:18,515 DEBUG [main] handler.AbstractDetectingUrlHandlerMapping (AbstractDetectingUrlHandlerMapping.java:86) - Rejected bean name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': no URL paths identified
        2010-04-04 20:06:19,109 DEBUG [main] support.AbstractBeanFactory (AbstractBeanFactory.java:241) - Returning cached instance of singleton bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0'

    Look at the second line of this log snippet. Is Spring rejecting the DefaultAnnotationHandlerMapping bean? And if so, could this be the problem with my interceptor not working? Here is my application context:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"
               default-autowire="byName">

            <!-- Configures the @Controller programming model -->
            <mvc:annotation-driven />

            <!-- Scan for annotations... -->
            <context:component-scan base-package="
                com.splash.web.controller,
                com.splash.web.service,
                com.splash.web.authentication"/>

            <bean id="authorizedUserInterceptor" class="com.splash.web.handler.AuthorizedUserInterceptor"/>

            <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
                <property name="interceptors">
                    <list>
                        <ref bean="authorizedUserInterceptor"/>
                    </list>
                </property>
            </bean>

    Here is my interceptor:

        package com.splash.web.handler;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.log4j.Logger;
        import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

        public class AuthorizedUserInterceptor extends HandlerInterceptorAdapter {
            // log field declared here; the original snippet used 'log' without declaring it
            private static final Logger log = Logger.getLogger(AuthorizedUserInterceptor.class);

            @Override
            public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
                log.debug(">>> Operation intercepted...");
                return true;
            }
        }

    Does anyone see anything wrong with this? What does the error I mentioned above actually mean, and could it have any bearing on the interceptor not being called? Thanks!
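    As a side note, the 'Rejected bean name' entry is normal DEBUG output from URL-path detection rather than an error. A more likely suspect: with <mvc:annotation-driven/> in place, Spring registers its own handler mapping, which can take precedence over the manually declared one carrying the interceptor. In Spring 3.0 the supported way to register an interceptor alongside that setup is <mvc:interceptors>; a sketch:

        <!-- Replaces the manual DefaultAnnotationHandlerMapping declaration. -->
        <mvc:interceptors>
            <bean class="com.splash.web.handler.AuthorizedUserInterceptor"/>
        </mvc:interceptors>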

    Read the article

  • Parsing a .NET DataSet returned from a .NET Web Service in Java

    - by Chris Dail
    I have to consume a .NET-hosted web service from a Java application. Interoperability between the two is usually very good. The problem I'm running into is that the .NET application developer chose to expose data using the .NET DataSet object. There are lots of articles written as to why you should not do this and how it makes interoperability difficult:

        http://www.hanselman.com/blog/ReturningDataSetsFromWebServicesIsTheSpawnOfSatanAndRepresentsAllThatIsTrulyEvilInTheWorld.aspx
        http://www.lhotka.net/weblog/ThoughtsOnPassingDataSetObjectsViaWebServices.aspx
        http://aspnet.4guysfromrolla.com/articles/051805-1.aspx
        http://www.theserverside.net/tt/articles/showarticle.tss?id=Top5WSMistakes

    My problem is that despite this not being recommended practice, I am stuck with having to consume a web service returning a DataSet with Java. When you generate a proxy for something like this with anything other than .NET, you basically end up with an object that looks like this:

        @XmlElement(namespace = "http://www.w3.org/2001/XMLSchema", required = true)
        protected Schema schema;

        @XmlAnyElement(lax = true)
        protected Object any;

    The first field is the actual schema that should describe the DataSet. When I process this using JAX-WS and JAXB in Java, it brings all of XS-Schema in as Java objects to be represented here. Walking the JAXB object tree is possible but not pretty. The any field represents the raw XML for the DataSet that is in the schema specified by the schema field. The structure of the dataset is pretty consistent, but the data types do change. I need access to the type information, and the schema does vary from call to call. I've thought of a few options, but none seem like 'good' options:

        1. Generating Java objects from the schema using JAXB at runtime seems like a bad idea. This would be way too slow, since it would need to happen every time.
        2. Brute-force walking the schema tree using the JAXB objects that JAX-WS brought in.
        3. Instead of using JAXB to parse the schema, it might be easier to deal with it as XML and use XPath to try and find the type information I need.

    Are there other options I have not considered? Is there a Java library to parse DataSet objects easily? What have other people done who may have similar situations?
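    As a rough illustration of option 3, XPath over the raw payload is only a handful of lines; the diffgram element layout below is the usual .NET DataSet serialization shape, and the file name is made up:

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class DataSetReader {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document doc = dbf.newDocumentBuilder().parse("dataset-response.xml");

                XPath xpath = XPathFactory.newInstance().newXPath();
                // Row elements live under the diffgram; local-name() sidesteps
                // namespace-context setup for this sketch.
                NodeList rows = (NodeList) xpath.evaluate(
                        "//*[local-name()='diffgram']/*[local-name()='NewDataSet']/*",
                        doc, XPathConstants.NODESET);
                for (int i = 0; i < rows.getLength(); i++) {
                    Element row = (Element) rows.item(i);
                    System.out.println("row element: " + row.getLocalName());
                }
            }
        }

    The same XPath approach can be pointed at the preceding xs:schema element to pull each column's declared type without materializing the whole schema object tree.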

    Read the article

  • Getting java.lang.ClassNotFoundException: javax.servlet.ServletContext in junit

    - by coder
    I'm using Spring MVC in my application and I'm writing JUnit test cases for a DAO. But when I run the test, I get an error: java.lang.ClassNotFoundException: javax.servlet.ServletContext. In the stack trace, I see that this error is caused during getApplicationContext. In my applicationContext I haven't defined any servlet; servlet mapping is done only in web.xml, so I don't understand why I'm getting this error. Here is my applicationContext.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

            <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
                <property name="driverClass" value="com.mysql.jdbc.Driver"/>
                <property name="jdbcUrl" value="jdbc:mysql://localhost:3306/testdb"/>
                <property name="user" value="username"/>
            </bean>

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
                <property name="dataSource" ref="dataSource"/>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                        <prop key="hibernate.connection.driver_class">com.mysql.jdbc.Driver</prop>
                        <prop key="hibernate.connection.url">jdbc:mysql://localhost:3306/myWorld_test</prop>
                        <prop key="hibernate.connection.username">username</prop>
                    </props>
                </property>
                <property name="packagesToScan">
                    <list>
                        <value>com.myprojects.pojos</value>
                    </list>
                </property>
            </bean>

            <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate">
                <property name="sessionFactory" ref="sessionFactory"/>
            </bean>

            <tx:annotation-driven transaction-manager="transactionManager"/>

            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory" />
            </bean>

            <context:component-scan base-package="com.myprojects"/>
            <context:annotation-config/>
            <mvc:annotation-driven/>
        </beans>

    Here is the stack trace:

        java.lang.NoClassDefFoundError: javax/servlet/ServletContext
            at java.lang.Class.getDeclaredMethods0(Native Method)
            at java.lang.Class.privateGetDeclaredMethods(Class.java:2521)
            at java.lang.Class.getDeclaredMethods(Class.java:1845)
            at org.springframework.core.type.StandardAnnotationMetadata.hasAnnotatedMethods(StandardAnnotationMetadata.java:161)
            at org.springframework.context.annotation.ConfigurationClassUtils.isLiteConfigurationCandidate(ConfigurationClassUtils.java:106)
            at org.springframework.context.annotation.ConfigurationClassUtils.checkConfigurationClassCandidate(ConfigurationClassUtils.java:88)
            at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:253)
            at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:223)
            at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:630)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:461)
            at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:120)
            at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60)
            at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:100)
            at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:248)
            at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64)
            at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91)
            at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:122)
            at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109)
            at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75)
            at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:312)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:211)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:288)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:284)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88)
            at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
            at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
            at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
            at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
            at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
            at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
            at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80)
            at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47)
            at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
            at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
            at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
            at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
            at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
            at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
            at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
            at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
            at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
            at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at java.lang.Thread.run(Thread.java:724)
        Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletContext
            at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
            ... 62 more

    Test class:

        import org.junit.After;
        import org.junit.AfterClass;
        import org.junit.Before;
        import org.junit.BeforeClass;
        import org.junit.Test;
        import static org.junit.Assert.*;
        import org.junit.runner.RunWith;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = {"classpath:applicationContext.xml"})
        public class UserServiceTest {

            @Autowired
            private UserService service;

            public UserServiceTest() {
            }

            @BeforeClass
            public static void setUpClass() {
            }

            @AfterClass
            public static void tearDownClass() {
            }

            @Before
            public void setUp() {
            }

            @After
            public void tearDown() {
            }
        }

    Even before writing any test method, I got this error. Any idea why?
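    Since the stack trace shows Gradle's test worker, the usual cause is simply that the servlet API, which the container provides at runtime, is missing from the test classpath; <mvc:annotation-driven/> pulls in classes that reference ServletContext. A sketch of the dependency to add (version is an example):

        // build.gradle: make the servlet API visible to the test runner.
        dependencies {
            testCompile 'javax.servlet:javax.servlet-api:3.0.1'
            // or, for older stacks: testCompile 'javax.servlet:servlet-api:2.5'
        }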

    Read the article

  • Generate XSD from 2005 SQL Server Database

    - by Robert Finlayson
    What is the easiest method to generate an XSD schema from a SQL Server 2005 database? Would it be possible to generate one XSD schema for the entire database (~100 tables)? I searched online and found a SQL example that generates one XSD for one table:

        DECLARE @schema xml
        SET @schema = (SELECT * FROM MyTableName FOR XML AUTO, ELEMENTS, XMLSCHEMA('MyTableNameSchema'))
        SELECT @schema

    Outside of SQL Server, is there a third-party tool that could generate the XSD file from the 2005 database?
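    To cover all ~100 tables, one option is to generate that same statement per table from the catalog. A sketch (tables in non-default schemas or with unusual names would need more care):

        -- Emit one inline XSD per base table by generating the FOR XML queries
        -- from sys.tables; each EXEC returns that table's schema as a result set.
        DECLARE @name SYSNAME, @sql NVARCHAR(MAX);
        DECLARE tables_cur CURSOR FOR
            SELECT name FROM sys.tables ORDER BY name;
        OPEN tables_cur;
        FETCH NEXT FROM tables_cur INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'SELECT * FROM ' + QUOTENAME(@name)
                     + N' FOR XML AUTO, ELEMENTS, XMLSCHEMA(''' + @name + N'Schema'')';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM tables_cur INTO @name;
        END
        CLOSE tables_cur;
        DEALLOCATE tables_cur;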

    Read the article

  • @Transactional in Spring+Hibernate

    - by Arun Kumar
    I am using Spring 3.1 + Hibernate 4.x in my web application. In my DAO, I am saving a User object as follows:

        sessionFactory.getCurrentSession().save(user);

    But I get the following exception:

        org.hibernate.HibernateException: save is not valid without active transaction

    I googled and found a similar question on SO, with the following solution:

        Session session = getSessionFactory().getCurrentSession();
        Transaction trans = session.beginTransaction();
        session.save(entity);
        trans.commit();

    That solves the problem, but that solution leaves a lot of mess, beginning and committing transactions manually. Can't I use sessionFactory.getCurrentSession().save(user); directly, without beginning/committing transactions by hand? I tried to use @Transactional on my service/DAO methods too, but the problem persists.

    EDIT: Here is my Hibernate config file:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd
                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.1.xsd">

            <!-- enable the configuration of transactional behavior based on annotations -->
            <tx:annotation-driven transaction-manager="txManager"/>

            <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"
                  p:driverClassName="${db.driverClassName}"
                  p:url="${db.url}"
                  p:username="${db.username}"
                  p:password="${db.password}" />

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
                <property name="dataSource" ref="dataSource" />
                <property name="packagesToScan" value="com.myapp.entities" />
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                    </props>
                </property>
            </bean>

            <!-- Transaction Manager Added -->
            <bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory">
                    <ref bean="sessionFactory" />
                </property>
            </bean>
        </beans>

    Please help.
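    One detail that may matter here: the config mixes Hibernate 4.x with Spring's org.springframework.orm.hibernate3 classes. Spring 3.1 ships hibernate4 equivalents, and using them is the documented route for @Transactional to bind the current Session when Hibernate 4 is on the classpath. A sketch of the swapped class names, with the other properties left as in the config above:

        <!-- Sketch: hibernate4 variants for a Hibernate 4.x classpath. -->
        <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource"/>
            <property name="packagesToScan" value="com.myapp.entities"/>
            <!-- hibernateProperties unchanged from the question's config -->
        </bean>
        <bean id="txManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory"/>
        </bean>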

    Read the article

  • SQL Developer Database Diff – Compare Objects From Multiple Schemas

    - by thatjeffsmith
    Ever wonder why Database Diff isn't called Schema Diff? One reason is that SQL Developer allows you to select objects from more than one schema in the 'Source' connection for the compare. Simply use the 'More' dialog view and select as many tables from as many different schemas as you require. Now, before you get around to testing this – as you should never believe what I say; trust but verify – there are two things you need to know: I'm using SQL Developer version 3.2, and on the initial screen you need to use the 'Maintain' option. Maintain tells SQL Developer to use the schema designation in the source connection to find the same corresponding object in the destination schema. Choose 'Maintain' if you want to compare objects in the same schema in the destination but don't have the user login for that schema. So after you've selected your databases, your diff preferences, and your objects – you're ready to perform the compare and review your results.

    The Diff Report: Notice the highlighted text; SQL Developer is 'maintaining' the schema context from the two databases. Short and sweet. That's pretty much all there is to doing a compare with SQL Developer with multiple schemas involved.

    You may have noticed in some posts lately that my editor screenshots had a 'green screen' look and feel to them. What's with the black background in your editors? In the SQL Developer preferences, you can set your editor color schemes. I started with the 'Twilight' scheme (team Jacob, in case you're wondering) and then customized it further by going with a default green font color. You could go pretty crazy in here, and I'm assuming 90% of you couldn't care less and will just stick with the original. But for those of you who are particular about your IDE styling – go crazy! SQL Developer Editor Display Preferences

    Read the article

  • Could not find schema information for the element 'castle'.

    - by Richard77
    Hello! I'm creating a custom tag in my web.config. I first wrote the following entry under the configSections section:

        <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />

    But when I try to create a castle node inside the configuration node as below:

        <castle>
          <components>
          </components>
        </castle>

    I get the following error messages: "Could not find schema information for the element 'castle'." and "Could not find schema information for the element 'components'." Am I missing something? I can't find why. And if I run the application anyway, I get the following error: "Could not find section 'Castle' in the configuration file associated with this domain." Ps.// The sample comes from "Pro ASP.NET MVC Framework" / Steven Sanderson / Apress, ISBN-13 (pbk): 978-1-4302-1007-8, page 99. Thank you for the help.

    ============================================================

    Since I believe I have done exactly what's said in the book and did not succeed, I ask the same question in different terms: how do I add a new node using the above information?

    =============================================================================

    Thank you. I did what you said and no longer have the two warnings. However, I've got a big new warning: "The element 'configuration' in namespace 'MyWindsorSchema' has invalid child element 'configSections' in namespace 'MyWindsorSchema'. List of possible elements expected: 'include, properties, facilities, components' in namespace 'MyWindsorSchema'."

    Read the article

  • How to create XSD schema from XML with this kind of structure (in .net)?

    - by Mr. Brownstone
    Here's the problem: my input is an XML file that looks something like:

        <BaseEntityClassInfo>
          <item>
            <key>BaseEntityClassInfo.SomeField</key>
            <value>valueData1</value>
          </item>
          <item>
            <key>BaseEntityClassInfo.AdditionalDataClass.SomeOtherField</key>
            <value>valueData2</value>
          </item>
          <item>
            <key>BaseEntityClassInfo.AdditionalDataClass.AnotherClassInfo.DisplayedText</key>
            <value>valueData3</value>
          </item>
          ...
        </BaseEntityClassInfo>

    The <key> element somehow describes entity class fields and relationships (used in some other app that I don't have access to), and the <value> stores the actual data that I need. My goal is to programmatically generate a typed DataSet from this XML that could then be used for creating reports. I thought of building an XSD schema from the input XML file first and then using this schema to generate the DataSet, but I'm not sure how to do that. The problem is that I don't want all the data in one table; I need several tables with relationships based on the <key> value, so I guess I need to infer relational structure from the XML <key> data in some way. So what do you think? How could this be done, and what would be the best approach? Any advice, ideas or suggestions would be appreciated!
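    If inferring a baseline XSD is the sticking point, .NET's XmlSchemaInference can produce one directly from the instance document; it won't recover the relational structure hidden in the <key> paths, which would still need custom code to split into tables. A minimal sketch (file name invented):

        // Infer a starting-point XSD from the XML instance, then refine it.
        using System;
        using System.Xml;
        using System.Xml.Schema;

        class InferXsd
        {
            static void Main()
            {
                var inference = new XmlSchemaInference();
                XmlSchemaSet schemaSet;
                using (var reader = XmlReader.Create("input.xml"))
                {
                    schemaSet = inference.InferSchema(reader);
                }
                foreach (XmlSchema schema in schemaSet.Schemas())
                {
                    schema.Write(Console.Out); // dump the inferred XSD
                }
            }
        }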

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have this series, Notes from the Field, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality this kind of problem makes our DBAs' lives interesting, and in this blog post we have an amazing story from Brian Kelley about that very subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report:

    Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted!

    Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace like any other database trace, and the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database:

    Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report but where it's still available.

    Finding the Information:

    I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
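    A hedged sketch of querying such a copied trace file directly, instead of opening it in Profiler; the path and file name are examples:

        -- Read a copied default-trace file with the built-in table-valued
        -- function; DEFAULT reads all rollover files from that point on.
        SELECT t.StartTime, e.name AS EventName, t.DatabaseName, t.ObjectName, t.LoginName
        FROM sys.fn_trace_gettable(N'C:\Working\log_123.trc', DEFAULT) AS t
        JOIN sys.trace_events e ON t.EventClass = e.trace_event_id
        WHERE t.DatabaseName = N'DeleteMe'
        ORDER BY t.StartTime;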
    You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Mono-LibreOffice System.TypeLoadException

    - by Marco
    In the past I wrote a C# library to work with OpenOffice, and this worked fine both in Windows and under Ubuntu with Mono. Part of this library is published here as an accepted answer. These days I discovered that Ubuntu decided to move to LibreOffice, so I tried my library with the latest LibreOffice stable release. While it works perfectly under Windows, under Linux I receive this error:

        Unhandled Exception: System.TypeLoadException: A type load exception has occurred.
        [ERROR] FATAL UNHANDLED EXCEPTION: System.TypeLoadException: A type load exception has occurred.

    Usually Mono tells us which library it can't load, so I can install the correct package and everything is OK, but in this case I really don't know what's going wrong. I'm using Ubuntu Oneiric and my library is compiled against Framework 4.0. Under Windows I had to write this in app.config:

        <?xml version="1.0"?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0,Profile=Client"/>
          </startup>
        </configuration>

    because the LibreOffice assemblies use Framework 2.0 (I think). How can I find the reason for this error and solve it? Thanks.

    UPDATE: Even compiling with Framework 2.0 the problem (as expected) is the same. The problem (I think) is that Mono is not finding the cli-uno-bridge package (installable on previous Ubuntu releases and now marked as superseded), but I cannot be sure.

    UPDATE 2: I created a test console application on Windows referencing the cli-uno DLLs (they are registered in GAC_32 and GAC_MSIL).

    CONSOLE app:

        static void Main(string[] args)
        {
            Console.WriteLine("Starting");
            string dir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
            string doc = Path.Combine(dir, "Liberatoria siti web.docx");
            using (QOpenOffice.OpenOffice oo = new QOpenOffice.OpenOffice())
            {
                if (!oo.Init()) return;
                oo.Load(doc, true);
                oo.ExportToPdf(Path.ChangeExtension(doc, ".pdf"));
            }
        }

    LIBRARY:

        using unoidl.com.sun.star.lang;
        using unoidl.com.sun.star.uno;
        using unoidl.com.sun.star.container;
        using unoidl.com.sun.star.frame;
        using unoidl.com.sun.star.beans;
        using unoidl.com.sun.star.view;
        using unoidl.com.sun.star.document;
        using System.Collections.Generic;
        using System.IO;
        using System;

        namespace QOpenOffice
        {
            class OpenOffice : IDisposable
            {
                private XComponentContext context;
                private XMultiServiceFactory service;
                private XComponentLoader component;
                private XComponent doc;

                public bool Init()
                {
                    Console.WriteLine("Entering Init()");
                    try
                    {
                        context = uno.util.Bootstrap.bootstrap();
                        service = (XMultiServiceFactory)context.getServiceManager();
                        component = (XComponentLoader)service.createInstance("com.sun.star.frame.Desktop");
                        XNameContainer filters = (XNameContainer)service.createInstance("com.sun.star.document.FilterFactory");
                        return true;
                    }
                    catch (System.Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                        if (ex.InnerException != null)
                            Console.WriteLine(ex.InnerException.Message);
                        return false;
                    }
                }
            }
        }

    But I'm not able to see "Starting"!!! If I comment out the using(...) block in the application, I do see the line on the console... so I think something is wrong in the DLL. There I'm not able to see the "Entering Init()" message in Init(). The behaviour is the same whether LibreOffice is installed or not!!! The try..catch block is never executed...
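    One diagnostic worth trying, as a sketch: Mono's assembly-load tracing usually names the library behind a bare TypeLoadException (the binary name below is a placeholder):

        # Trace assembly and native-library loading to find what fails to resolve.
        MONO_LOG_LEVEL=debug MONO_LOG_MASK=asm,dll mono MyApp.exe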

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and loads them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema.
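    Purely as an illustration of the sales example, a minimal star schema in DDL, with invented names, plus the kind of slice-and-dice query it enables:

        -- One fact table keyed by surrogate keys into two dimensions.
        CREATE TABLE DimAgent (
            AgentKey      INT IDENTITY PRIMARY KEY,  -- surrogate key
            AgentCode     VARCHAR(20),               -- business key from the OLTP system
            AgentName     VARCHAR(100),
            Region        VARCHAR(50),
            EffectiveFrom DATE,                      -- SCD type 2 history tracking
            EffectiveTo   DATE
        );

        CREATE TABLE DimDate (
            DateKey      INT PRIMARY KEY,            -- e.g. 20140704
            CalendarDate DATE,
            [Month]      TINYINT,
            [Year]       SMALLINT
        );

        CREATE TABLE FactSales (
            AgentKey    INT REFERENCES DimAgent(AgentKey),
            DateKey     INT REFERENCES DimDate(DateKey),
            SalesAmount DECIMAL(18,2),               -- the measure to slice and dice
            Quantity    INT
        );

        -- Sales by region and year: the fact joins out to its dimensions.
        SELECT a.Region, d.[Year], SUM(f.SalesAmount) AS TotalSales
        FROM FactSales f
        JOIN DimAgent a ON a.AgentKey = f.AgentKey
        JOIN DimDate  d ON d.DateKey  = f.DateKey
        GROUP BY a.Region, d.[Year];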
This relational structure of facts and dimensions serves as input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial products such as SQL Server Analysis Services, logically it is a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often form a hierarchy. E.g. Product may be broken into categories, and categories in turn into individual items. Category and Item are referred to as Levels and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up along the dimensional hierarchy, and these rolled-up measures are called Aggregates. This may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes.

Let's look at a few other terms we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. A few users in your organization want to report on the most current data and can't afford to miss a single transaction in their reports. Another set of users typically doesn't care how current the data is: mostly senior executives interested in trending, mining, forecasting, strategizing, etc., who don't care about one specific transaction. This is where an ODS comes in handy. An ODS can use the same star schema and OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

Data marts are another frequently discussed topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, and track, hence they are often scaled down to data marts that support a specific segment of the business, such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts alongside a central data warehouse; the data warehouse then acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to set up default schema name in JPA configuration?

    - by Roman
    I found that in the Hibernate config file we can set the parameter hibernate.default_schema:

        <hibernate-configuration>
          <session-factory>
            ...
            <property name="hibernate.default_schema">myschema</property>
            ...
          </session-factory>
        </hibernate-configuration>

    Now I'm using JPA and I want to do the same. Otherwise I have to add the schema attribute to each @Table annotation, like:

        @Entity
        @Table(name = "projectcategory", schema = "SCHEMANAME")
        public class Category implements Serializable { ... }

    As I understand it, this parameter should go somewhere in this part of the configuration:

        <bean id="domainEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="JiraManager"/>
          <property name="dataSource" ref="domainDataSource"/>
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
              <property name="generateDdl" value="false"/>
              <property name="showSql" value="false"/>
              <property name="databasePlatform" value="${hibernate.dialect}"/>
            </bean>
          </property>
        </bean>

        <bean id="domainDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
          <property name="driverClass" value="${db.driver}" />
          <property name="jdbcUrl" value="${datasource.url}" />
          <property name="user" value="${datasource.username}" />
          <property name="password" value="${datasource.password}" />
          <property name="initialPoolSize" value="5"/>
          <property name="minPoolSize" value="5"/>
          <property name="maxPoolSize" value="15"/>
          <property name="checkoutTimeout" value="10000"/>
          <property name="maxStatements" value="150"/>
          <property name="testConnectionOnCheckin" value="true"/>
          <property name="idleConnectionTestPeriod" value="50"/>
        </bean>

    ... but I can't find its name on Google. Any ideas?
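    One common way to do this, assuming Hibernate is the JPA provider as in the configuration above (the bean id and the schema name myschema are placeholders carried over from the question), is to pass hibernate.default_schema through as a vendor property on the entity manager factory:

        <bean id="domainEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="JiraManager"/>
          <property name="dataSource" ref="domainDataSource"/>
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
          </property>
          <!-- vendor-specific properties are handed straight to Hibernate -->
          <property name="jpaProperties">
            <props>
              <prop key="hibernate.default_schema">myschema</prop>
            </props>
          </property>
        </bean>

    The same property can equally live in persistence.xml, which keeps it out of the Spring wiring:

        <persistence-unit name="JiraManager">
          <properties>
            <property name="hibernate.default_schema" value="myschema"/>
          </properties>
        </persistence-unit>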

    Read the article

  • WSDLException : An error occurred trying to resolve schema referenced at ...

    - by Stefano
    Hello, I'm trying to generate a proxy class from a local WSDL file with Eclipse Galileo and Axis2 1.4 on Windows XP. My problem is that I get an error due to an imported schema inside the WSDL. The line that troubles me is:

        <xsd:import namespace="http://www.w3.org/2005/05/xmlmime" schemaLocation="http://www.w3.org/2005/05/xmlmime"/>

    I've tried to run wsdl2java with the following command:

        wsdl2java.bat -uri SOAService.wsdl -o D:\temp -p test -d xmlbeans -a -s -ns2p -uw

    and I get the following exception:

        Exception in thread "main" org.apache.axis2.wsdl.codegen.CodeGenerationException: Error parsing WSDL
            at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.<init>(CodeGenerationEngine.java:156)
            at org.apache.axis2.wsdl.WSDL2Code.main(WSDL2Code.java:35)
            at org.apache.axis2.wsdl.WSDL2Java.main(WSDL2Java.java:24)
        Caused by: javax.wsdl.WSDLException: WSDLException (at /wsdl:definitions/wsdl:types/xsd:schema): faultCode=OTHER_ERROR: An error occurred trying to resolve schema referenced at 'http://www.w3.org/2005/05/xmlmime', relative to 'file:/D:/Programmi/axis2-1.4/bin/SOAService.wsdl'.: java.net.ConnectException: Connection timed out: connect
            at com.ibm.wsdl.xml.WSDLReaderImpl.parseSchema(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.parseSchema(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.parseTypes(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.parseDefinitions(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source)
            at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source)
            at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.readInTheWSDLFile(CodeGenerationEngine.java:288)
            at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.<init>(CodeGenerationEngine.java:111)
            ... 2 more
        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.Socket.connect(Socket.java:520)
            at java.net.Socket.connect(Socket.java:470)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:388)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:523)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:231)
            at sun.net.www.http.HttpClient.New(HttpClient.java:304)
            at sun.net.www.http.HttpClient.New(HttpClient.java:321)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:813)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:765)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:690)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:934)
            at java.net.URL.openStream(URL.java:1007)
            at com.ibm.wsdl.util.StringUtils.getContentAsInputStream(Unknown Source)

    I suspect this is due to the system proxy, which doesn't let the wsdl2java tool retrieve the XSD referenced by the WSDL; in fact, I can download the file from the browser without problems. Is there an option to specify a proxy to wsdl2java, or has someone resolved this issue? For the moment I've downloaded the XSD, added it to the project, and changed the WSDL to reference the local copy (instead of the remote one), but I'd prefer to avoid this because the file is a third-party service WSDL. Thank you in advance for any hint. Stefano
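    One workaround to try, assuming the stock Axis2 wsdl2java.bat passes JAVA_OPTS through to the JVM (worth verifying in your copy of the script) and using a hypothetical proxy host and port, is to set the standard JVM proxy properties before generating the stubs:

        rem Point the JVM at the corporate proxy (host and port are placeholders)
        set JAVA_OPTS=-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080
        wsdl2java.bat -uri SOAService.wsdl -o D:\temp -p test -d xmlbeans -a -s -ns2p -uw

    If the script in your distribution ignores JAVA_OPTS, the same -D flags can be added directly to the java invocation inside wsdl2java.bat. Failing that, the local-copy approach already described (a relative schemaLocation pointing at the downloaded xmlmime XSD) remains the offline-safe fallback.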

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >