Search Results

Search found 3163 results on 127 pages for 'schema'.


  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello all. I have a denormalized table containing employee information. The fields are employee id, name, and department name; the primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.).

    Example data:

        1 e1 dep1
        2 e1 dep2
        3 e2 dep1
        4 e2 dep3
        5 e3 dep1
        5 e3 dep2
        5 e3 dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to group the entities by emp id and return all the departments an employee belongs to as a list. I can merge the entities after retrieving them, but it would be better if I could use some EclipseLink feature out of the box.

    One way to do the read is the following. I create two dynamic types corresponding to employee: one having id, name as the primary key, and one having id, department as the primary key. I create a OneToManyMapping from the first type to the second one. Then, when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA?

    I cannot get the same dynamic type configuration working for the write scenario. When I write the changes using the writeObject method of UnitOfWork, it generates insert queries which put the following entries in the table:

        id   name     department
        102  emp_102
        102            st
        102            dep_102
        102            dep_102
        102            dep_102

    instead of:

        id   name     department
        102  emp_102  st
        102  emp_102  dep_102
        102  emp_102  dep_102
        102  emp_102  dep_102

    Is there any way I can get write to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema, or generating each row myself before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.
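
    For reference, a minimal sketch of the two-type read mapping described above, using the EclipseLink dynamic persistence API. The table and column names (EMP_DEP, ID, NAME, DEPARTMENT) and the open EntityManager em are assumptions for illustration, not taken from the question:

        import javax.persistence.EntityManager;

        import org.eclipse.persistence.dynamic.DynamicClassLoader;
        import org.eclipse.persistence.jpa.dynamic.JPADynamicHelper;
        import org.eclipse.persistence.jpa.dynamic.JPADynamicTypeBuilder;

        public class DynamicSetup {
            public static void defineTypes(EntityManager em) {
                DynamicClassLoader dcl =
                    new DynamicClassLoader(Thread.currentThread().getContextClassLoader());
                Class<?> empClass = dcl.createDynamicClass("model.Employee");
                Class<?> depClass = dcl.createDynamicClass("model.Department");

                // Both types map onto the same denormalized table (name assumed).
                JPADynamicTypeBuilder employee =
                    new JPADynamicTypeBuilder(empClass, null, "EMP_DEP");
                JPADynamicTypeBuilder department =
                    new JPADynamicTypeBuilder(depClass, null, "EMP_DEP");

                // First type: id + name identify an employee.
                employee.setPrimaryKeyFields("ID", "NAME");
                employee.addDirectMapping("id", Integer.class, "ID");
                employee.addDirectMapping("name", String.class, "NAME");

                // Second type: id + department identify one membership row.
                department.setPrimaryKeyFields("ID", "DEPARTMENT");
                department.addDirectMapping("empId", Integer.class, "ID");
                department.addDirectMapping("department", String.class, "DEPARTMENT");

                // Fan each employee out to its department rows via the shared ID column.
                employee.addOneToManyMapping("departments", department.getType(), "ID");

                new JPADynamicHelper(em).addTypes(true, true,
                    employee.getType(), department.getType());
            }
        }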

  • Perl, LibXML and Schemas

    - by Xetius
    I have an example Perl script with which I am trying to load a file, validate it against a schema, and then interrogate various nodes:

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use XML::LibXML;

        my $filename = 'source.xml';
        my $xml_schema = XML::LibXML::Schema->new(location => 'library.xsd');
        my $parser = XML::LibXML->new();
        my $doc = $parser->parse_file($filename);

        eval { $xml_schema->validate($doc); };
        if ($@) {
            print "File failed validation: $@" if $@;
        }

        eval {
            print "Here\n";
            foreach my $book ($doc->findnodes('/library/book')) {
                my $title = $book->findnodes('./title');
                print $title->to_literal(), "\n";
            }
        };
        if ($@) {
            print "Problem parsing data : $@\n";
        }

    Unfortunately, although it is validating the XML file fine, it is not finding any $book items and therefore not printing anything out. If I remove the schema from the XML file and the validation from the Perl file, then it works fine. I am using the default namespace. If I change it to not use the default namespace (xmlns:lib="http://libs.domain.com"), prefix all items in the XML file with lib, and change the XPath expressions to include the namespace prefix (/lib:library/lib:book), then it again works fine. Why? And what am I missing?

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <library xmlns="http://lib.domain.com"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://lib.domain.com .\library.xsd">
          <book>
            <title>Perl Best Practices</title>
            <author>Damian Conway</author>
            <isbn>0596001738</isbn>
            <pages>542</pages>
            <image src="http://www.oreilly.com/catalog/covers/perlbp.s.gif" width="145" height="190"/>
          </book>
          <book>
            <title>Perl Cookbook, Second Edition</title>
            <author>Tom Christiansen</author>
            <author>Nathan Torkington</author>
            <isbn>0596003137</isbn>
            <pages>964</pages>
            <image src="http://www.oreilly.com/catalog/covers/perlckbk2.s.gif" width="145" height="190"/>
          </book>
          <book>
            <title>Guitar for Dummies</title>
            <author>Mark Phillips</author>
            <author>John Chappell</author>
            <isbn>076455106X</isbn>
            <pages>392</pages>
            <image src="http://media.wiley.com/product_data/coverImage/6X/07645510/076455106X.jpg" width="100" height="125"/>
          </book>
        </library>

    XSD:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema xmlns="http://lib.domain.com"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   elementFormDefault="qualified"
                   targetNamespace="http://lib.domain.com">
          <xs:attributeGroup name="imagegroup">
            <xs:attribute name="src" type="xs:string"/>
            <xs:attribute name="width" type="xs:integer"/>
            <xs:attribute name="height" type="xs:integer"/>
          </xs:attributeGroup>
          <xs:element name="library">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="unbounded" name="book">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="title" type="xs:string"/>
                      <xs:element maxOccurs="unbounded" name="author" type="xs:string"/>
                      <xs:element name="isbn" type="xs:string"/>
                      <xs:element name="pages" type="xs:integer"/>
                      <xs:element name="image">
                        <xs:complexType>
                          <xs:attributeGroup ref="imagegroup"/>
                        </xs:complexType>
                      </xs:element>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
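
    The usual explanation: XPath 1.0 has no notion of a default namespace, so an unprefixed name in an XPath expression always means "no namespace" — once the document lives in http://lib.domain.com, /library/book matches nothing. A minimal sketch of the standard fix with XML::LibXML::XPathContext (the lib prefix is arbitrary and local to the XPath expressions; it need not appear in the document):

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use XML::LibXML;
        use XML::LibXML::XPathContext;

        my $doc = XML::LibXML->new()->parse_file('source.xml');

        # Bind an arbitrary prefix to the document's default namespace URI.
        my $xc = XML::LibXML::XPathContext->new($doc);
        $xc->registerNs('lib', 'http://lib.domain.com');

        # Every element in the document is in the namespace, so every step
        # in the path needs the prefix -- including title.
        foreach my $book ($xc->findnodes('/lib:library/lib:book')) {
            print $xc->findvalue('./lib:title', $book), "\n";
        }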

  • Oracle Internet Directory - Required Schemas already loaded

    - by Brandon Kreisel
    After a successful installation of Oracle Internet Directory (OID), I attempted to remove and re-install it with a different configuration. First I ran ./runInstaller.sh -deinstall from the OID middleware directory to uninstall. Then I ran the installer again, but it complained on the Specify Schema Database step. Upon connecting to the database, the installer threw:

        INST-5174: Required schemas are already loaded in the specified database.

    I connected to the target database and removed the OID schemas I knew of:

        SQL> drop user ODS cascade;
        SQL> drop user ODSSM cascade;

    That did not work and the error still appears. What steps am I missing?

    Note: the database is 11g and it was brand new before installing OID, so there is no other data. select * from all_users doesn't show any other schemas related to OID; from what I can tell, the latest user creation date is OCT 2010.
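
    One thing worth checking (a sketch, not a verified fix): Fusion Middleware installers also record loaded components in the schema version registry, so dropping the users alone may not clear INST-5174. Assuming the registry names used by the 11g installers:

        -- See which components the installer still believes are loaded.
        SELECT comp_id, comp_name, owner, version, status
          FROM schema_version_registry;

        -- If ODS/ODSSM rows remain after the users were dropped, remove them
        -- (connect as SYS; underlying table name as used by 11g installers).
        DELETE FROM system.schema_version_registry$
         WHERE owner IN ('ODS', 'ODSSM');
        COMMIT;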

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary:

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

    To create the database connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary). Then for the EmployeeID and DepartmentID attributes I select Integer as the data type, for the StartingDate and TerminationDate attributes I select Datetime as the data type, and for the FinalSalary attribute I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the pound or euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" character text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options. If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
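
    For reference, a rough T-SQL equivalent of the table expressor would create from this schema artifact. The table name is an assumption; the columns follow the attribute names and types chosen above, and the exact DDL the tool generates may differ:

        CREATE TABLE dbo.TerminatedEmployees (
            EmployeeID      int          NOT NULL,
            StartingDate    datetime     NOT NULL,
            TerminationDate datetime     NOT NULL,
            JobDescription  varchar(40)  NOT NULL,
            DepartmentID    int          NOT NULL,
            FinalSalary     money        NOT NULL
        );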

  • Is there a "rigorous" method for choosing a database?

    - by Andrew Martin
    I'm not experienced with NoSQL, but one person on my team is calling for its use. I believe our data and its usage aren't a good fit for a NoSQL implementation. However, my understanding is based on reading various threads on various websites. I'd like to get some stronger evidence as to who's correct. My question is therefore: "Is there a technique for estimating the performance and requirements of a certain database, that I could use to confirm or modify my intuitions?" Is there, for example, a good book for calculating the performance of equivalent MongoDB/MySQL schemas? Is the only really reliable option to build the whole thing and take metrics?

  • Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are Available.

    - by user702295
    Hello! Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are available. This is an updated re-release of the generic Partial Schema Exporter Tool. The generic patch is for 7.3.1.x and 12.2.x. TABLE_REORG was introduced in 7.3.1.3 and 12.2.0; therefore, for 7.3.1.x the schema must be at 7.3.1.3 or above.

    This is build 3 of the patch. It contains fixes for the following bugs:

    - BUG 17495971 - DEMANTRA 12.2 - CUMULATIVE HISTORY NOT CORRECT. It now uses DATA_PUMP COMPRESSION only on Enterprise Edition, 11g and up.
    - BUG 17452153 - 1OFF:16086475:TRYING TO FILTER DROP DOWN IN A METHOD CALL USING MORE THAN 1 ATTR. It now builds GL level filters with and without the GL id column where applicable.

    These bugs are also fixed in 7.3.1.6 and 12.2.3.

  • How to change password schema for Dovecot user authentication for an already existing mail server

    - by deb_lrnr
    Hello, I have an email server set up on Debian Lenny with Postfix, Dovecot, SASL and MySQL. Currently, the password scheme in my dovecot-sql.conf file is set to CRYPT:

        default_pass_scheme = CRYPT

    I would like to globally change the scheme to something stronger like SSHA or MD5-CRYPT, and re-hash all passwords with SSHA. What is the best way to do this? The Dovecot wiki mentions how passwords that don't follow the default scheme defined in dovecot-sql.conf can be prefixed with "{ssha}password", but I couldn't see anything regarding changing an already-existing scheme to a new one for all passwords that are already in the database. Thanks for your help!
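
    A sketch of the usual migration path, assuming a MySQL users table with a password column (both names assumed). Existing CRYPT hashes cannot be converted to SSHA without the plaintext, so the trick is to tag them with their real scheme and let new passwords use the new default:

        -- One-time migration: mark every legacy hash with its scheme so
        -- Dovecot keeps accepting it after the default changes.
        UPDATE users SET password = CONCAT('{CRYPT}', password)
         WHERE password NOT LIKE '{%';

    Then set default_pass_scheme = SSHA in dovecot-sql.conf; from that point on, store new and changed passwords as SSHA (for example, hashes generated with the dovecotpw utility shipped with Dovecot 1.x). Each user's hash is upgraded the next time they change their password.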

  • How to specify ingredients in Schema.org (other than recipes or drugs)?

    - by guillem
    I'd like to use Schema.org for a series of product specs pages. The products are cosmetics (shampoos, creams, etc.) and ideally an INCI declaration will be included for every product. I'd like to use Schema.org in the declaration, but I cannot find a suitable category in Schema.org. Ingredients is meant for cooking recipes, and ActiveIngredient for (medical) drugs and dietary supplements. Did I overlook some part of the tree? Any ideas for a workaround? Or am I trying to do something out of scope?

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with symfony and Doctrine at the beginning of a new project. I created the database diagram with MySQL Workbench, loaded the SQL into phpMyAdmin, and then ran symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have on delete/on update), and after this I tried symfony doctrine:build --sql and symfony doctrine:insert-sql. The insert-sql statement generates an error (can't create table ... failing query: alter table add constraint ...), so I decided to take a look at the generated SQL, and I found some differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine.

    I'll keep it short: I have two tables, EVENT and FORM, and a 1-to-n relation (each event may have multiple forms), so the correct constraint (generated with Workbench) is:

        ALTER TABLE `form`
          ADD CONSTRAINT `fk_form_event1` FOREIGN KEY (`event_id`)
          REFERENCES `event` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

    The Doctrine-generated statement is:

        ALTER TABLE event
          ADD CONSTRAINT event_id_form_event_id FOREIGN KEY (id)
          REFERENCES form (event_id);

    It's totally reversed, and I am sure this is the error. What should I do? Is it even correct like this?
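
    One workaround, sketched under the assumption of symfony 1.x with Doctrine 1.x (column and model names illustrative): skip build-schema and hand-write the relation in config/doctrine/schema.yml, where onDelete/onUpdate can be stated explicitly and the foreign key ends up on the form table where it belongs:

        Event:
          columns:
            id:   { type: integer(4), primary: true, autoincrement: true }
            name: { type: string(255) }

        Form:
          columns:
            id:       { type: integer(4), primary: true, autoincrement: true }
            event_id: { type: integer(4), notnull: true }
          relations:
            Event:
              local: event_id
              foreign: id
              foreignAlias: Forms
              onDelete: CASCADE
              onUpdate: CASCADE

    Rebuilding with symfony doctrine:build --all should then emit the constraint in the right direction.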

  • The Schema returned by the new query differ from the base query

    - by sochandara
    I am working on a class project which requires working with a Windows application, and I ran into an issue that I don't understand how to solve. Can anybody help, please? I want to show the NATIONALITYNAME instead of the NATIONALITYID in the grid view:

        SELECT COACH.COACHID, COACH.COACHFIRSTNAME, COACH.COACHLASTNAME, NATIONALITY.NATIONALITY
        FROM COACH
        INNER JOIN NATIONALITY ON COACH.NATIONALITYID = NATIONALITY.NATIONALITYID

    Error message: "The schema returned by the new query differs from the base query"

  • Where is the handy designer for setting Permissions and schema diagram designer in a SQL2005 Database

    - by BlackMael
    I have just installed the GDR RTM version of Visual Studio Team System Database Edition. It all seems to work wonderfully, but I seem to have to edit XML (Database.sqlpermissions) to specify SQL permissions. Am I missing something? For that matter, where is the schema diagram tool? I understand GDR exposes a lot for extending the Database Edition components, so am I supposed to wait for third-party extensions to provide the diagram tool and permissions designer?

  • Sharepoint Document Library Schema.xml Customization

    - by Srikrishna Sallam
    Hi, I am trying to add a custom field to the Schema.xml of a document library in SharePoint; here is the code that I took from a blog. For the field ID I have to put a GUID: do I add my own GUID, or do I have to query the SharePoint database to find the GUID and paste it there? If I have to get it from the SharePoint database, which database, and in what table, will I find this information? Any help will be greatly appreciated. Thanks, srikrishna.

  • 'The default schema does not exist' on deploy of SQL CLR assembly onto SQL Server 2008

    - by abatishchev
    I'm deploying an example SQL CLR stored procedure which has a SQL CLR type as a parameter, using Visual Studio 2008 and menu Project -> Deploy:

        public partial class StoredProcedures
        {
            [Microsoft.SqlServer.Server.SqlProcedure]
            public static void TakeTariff(TariffInfo tariffInfo) { }
        }

        public class TariffInfo
        {
            public SqlDecimal Amount { get; private set; }
        }

    But I get the following strange error:

        The default schema does not exist.

    How can I fix that? My user was created this way:

        CREATE USER myUser FOR LOGIN myLogin_mod
        WITH DEFAULT_SCHEMA = mySchema
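
    A likely cause, for what it's worth: WITH DEFAULT_SCHEMA only records the schema name — it does not create the schema, and SQL Server allows it to reference a schema that does not exist yet. Deployment then fails when the name is resolved. A minimal sketch of the fix, reusing the names from the question:

        -- Create the missing schema and hand ownership to the user.
        CREATE SCHEMA mySchema AUTHORIZATION myUser;
        GO
        -- Optional: reassert the default (harmless if already set).
        ALTER USER myUser WITH DEFAULT_SCHEMA = mySchema;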

  • Biztalk Flat File Schema - how to accept an LF or CRLF as the line delimiter

    - by FullOfQuestions
    Hi, Our client sends us a flat file as input, which we then take and convert to an XML file before sending to the destination system. The flat file consists of multiple lines, each line is delimited by LF or CRLF. How do I create a Flat File Schema so that Biztalk can interpret each line of data regardless of whether the line was delimited by LF (0x0A) or CRLF (0x0D 0x0A)? Thank you in advance, M

  • Dynamically loading database schema information in .NET

    - by Nigel
    I am building a .NET application that given a connection string, at run time, needs to be able to retrieve information from the corresponding database schema, such as available columns, datatypes, and whether they are nullable. What is the best way to accomplish this? Has anyone done anything like this before? Many thanks, Nigel.
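
    A minimal sketch of the usual approach: every ADO.NET provider's DbConnection exposes GetSchema, which returns metadata collections such as "Columns" at run time. The connection string below is a placeholder:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SchemaDump
        {
            static void Main()
            {
                // Connection string supplied at run time in the real application.
                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                {
                    conn.Open();

                    // The "Columns" metadata collection lists every column with
                    // its data type and nullability.
                    DataTable columns = conn.GetSchema("Columns");
                    foreach (DataRow row in columns.Rows)
                    {
                        Console.WriteLine("{0}.{1}  {2}  nullable={3}",
                            row["TABLE_NAME"], row["COLUMN_NAME"],
                            row["DATA_TYPE"], row["IS_NULLABLE"]);
                    }
                }
            }
        }

    Because GetSchema is defined on DbConnection, the same pattern works with other providers; querying INFORMATION_SCHEMA.COLUMNS is an alternative when a SQL-level query is preferable.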

  • Database schema for multiple category/product relationship

    - by sree01
    I want to design a database for an e-commerce application with category/subcategory management. Please suggest a database schema where we can create categories and subcategories and add products to those categories. Each product can have multiple categories, and we should be able to check whether a product belongs to a given set of categories with a boolean-style database query. Thanks
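
    A sketch of one common design (all table and column names assumed): a self-referencing category table for subcategories plus a product/category junction table, with a boolean-style membership query at the end:

        -- Categories nest via parent_id; NULL parent_id means a top-level category.
        CREATE TABLE category (
            category_id INT PRIMARY KEY,
            parent_id   INT NULL REFERENCES category(category_id),
            name        VARCHAR(100) NOT NULL
        );

        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            name       VARCHAR(200) NOT NULL
        );

        -- Junction table: one row per (product, category) pairing.
        CREATE TABLE product_category (
            product_id  INT NOT NULL REFERENCES product(product_id),
            category_id INT NOT NULL REFERENCES category(category_id),
            PRIMARY KEY (product_id, category_id)
        );

        -- "Does product 42 belong to both categories 3 and 7?" (MySQL-style
        -- boolean result: 1 if it is in both, 0 otherwise)
        SELECT COUNT(DISTINCT category_id) = 2
          FROM product_category
         WHERE product_id = 42 AND category_id IN (3, 7);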

  • How to visualize an XML schema?

    - by matt
    I have made an XML Schema - all the code basically - and was wondering if there is a way to generate something like this from the code: http://www.novell.com/documentation/extend52/Docs/help/Director/books/PGImages/novell_portlet_xml_schema.gif If so, how can I do it?

  • Castle ActiveRecord - schema generation without enforcing referential integrity?

    - by Simon
    Hi all, I've just started playing with Castle ActiveRecord, as it seems like a gentle way into NHibernate. I really like the idea of the database schema being generated from my classes during development. I want to do something similar to the following:

        [ActiveRecord]
        public class Camera : ActiveRecordBase<Camera>
        {
            [PrimaryKey]
            public int CameraId { get; set; }

            [Property]
            public int CamKitId { get; set; }

            [Property]
            public string serialNo { get; set; }
        }

        [ActiveRecord]
        public class Tripod : ActiveRecordBase<Tripod>
        {
            [PrimaryKey]
            public int TripodId { get; set; }

            [Property]
            public int CamKitId { get; set; }

            [Property]
            public string serialNo { get; set; }
        }

        [ActiveRecord]
        public class CameraKit : ActiveRecordBase<CameraKit>
        {
            [PrimaryKey]
            public int CamKitId { get; set; }

            [Property]
            public string description { get; set; }

            [HasMany(Inverse=true, Table="Cameras", ColumnKey="CamKitId")]
            public IList<Camera> Cameras { get; set; }

            [HasMany(Inverse=true, Table="Tripods", ColumnKey="CamKitId")]
            public IList<Tripod> Tripods { get; set; }
        }

    A CameraKit should contain any number of tripods and cameras. Camera kits exist independently of cameras and tripods, but are sometimes related. The problem is, if I use CreateSchema, this will put foreign key constraints on the Camera and Tripod tables. I don't want this; I want to be able to set CamKitId to null on the Tripod and Camera tables to indicate that an item is not part of a CameraKit. Is there a way to tell ActiveRecord/NHibernate to still see them as related, without enforcing the integrity? I was thinking I could have a CameraKit record in there to indicate "no camera kit", but it seems like overkill. Or is my schema wrong? Am I doing something I shouldn't with an ORM? (I've not really used ORMs much.) Thanks!

  • Deploy only cube schema, without processing

    - by Ken Yao
    Is there a way to deploy only the cube schema, without processing the cube? It seems that in Visual Studio, when you deploy a cube, by default it is "Deploy and Process". The problem is that processing takes so much time, and my main purpose is just writing some MDX script and seeing if it works well against the cube structure. Processing the whole cube seems like overkill. So I ask.

  • Using NDbUnit with tables that have a table schema

    - by KevinT
    I am having issues using NDbUnit with tables that have their own schema - i.e.:

        CREATE TABLE MYSCHEMA.MyTable01 (
            Id int NOT NULL,
            Description varchar(50) NOT NULL
        )

    Is this a supported scenario? What do I need to do to get this to work? (It works fine when the table is dbo.MyTable01.)

  • Webservice description - xsd schema and xml generation

    - by dominolog
    Hello. I want to design a web service for communication between a Windows Mobile 6.5 based Palm device and a PHP web server. The Palm device software will be developed on the .NET Compact Framework. I envision the communication between the web server and the device as XML over HTTP. I want a tool that will let me design the web service (interfaces, methods) and then generate the XSD schema together with the XML messages for the communication. Can you recommend something? Regards

  • friendship database schema

    - by Daniel Hertz
    I'm creating a db schema that involves users that can be friends, and I was wondering about the best way to model friendships between these users. Should it be its own table that simply has two columns that each represent a user? Thanks!
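
    A common shape for this, sketched in SQL (the users table and user_id column are assumed): store one row per friendship, and keep the pair in canonical order so each friendship appears exactly once:

        CREATE TABLE friendship (
            user_a INT NOT NULL REFERENCES users(user_id),
            user_b INT NOT NULL REFERENCES users(user_id),
            PRIMARY KEY (user_a, user_b),
            CHECK (user_a < user_b)   -- canonical order prevents duplicate (b, a) rows
        );

        -- All friends of user 42, whichever column they landed in:
        SELECT CASE WHEN user_a = 42 THEN user_b ELSE user_a END AS friend_id
          FROM friendship
         WHERE 42 IN (user_a, user_b);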
