Search Results

Search found 6441 results on 258 pages for 'schema compare'.

  • How to add XML elements into the toolbox in Visual Studio 2010 RC

    - by Alex Marshall
    I'm trying to edit an XML schema in Visual Studio 2010 Ultimate RC, but when I go to the toolbox (with the schema open and focused) there's absolutely nothing in the Toolbox view, even though every tutorial I've read tells me that there should be. I've tried using the context menu option for resetting the Toolbox, to no effect. Is there something I'm missing? Something I need to install to get this feature of Visual Studio going?

  • How to formulate a SQL Server indexed view that aggregates distinct values?

    - by Jeremy Lew
    I have a schema that includes tables like the following (pseudo-schema):

        TABLE ItemCollection {
            ItemCollectionId,
            ...etc...
        }

        TABLE Item {
            ItemId,
            ItemCollectionId,
            ContributorId
        }

    I need to aggregate the number of distinct contributors per ItemCollectionId. This is possible with a query like:

        SELECT ItemCollectionId, COUNT(DISTINCT ContributorId)
        FROM Item
        GROUP BY ItemCollectionId

    I further want to pre-calculate this aggregation using an indexed (materialized) view, but the DISTINCT prevents an index being placed on this view. Is there any way to reformulate this which will not violate SQL Server's indexed view constraints?
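
    A common workaround is to let the indexed view group by both columns (which SQL Server does allow) and take the distinct count as a row count over the view at query time; a minimal sketch, assuming the table and column names above (the view and index names are placeholders):

        -- One row per (ItemCollectionId, ContributorId) pair.
        -- COUNT_BIG(*) is mandatory in SQL Server indexed views.
        CREATE VIEW dbo.vItemContributor
        WITH SCHEMABINDING
        AS
        SELECT ItemCollectionId, ContributorId, COUNT_BIG(*) AS PairCount
        FROM dbo.Item
        GROUP BY ItemCollectionId, ContributorId;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_vItemContributor
            ON dbo.vItemContributor (ItemCollectionId, ContributorId);
        GO
        -- Distinct contributors per collection = rows per collection in the view.
        SELECT ItemCollectionId, COUNT(*) AS DistinctContributors
        FROM dbo.vItemContributor WITH (NOEXPAND)
        GROUP BY ItemCollectionId;

    The NOEXPAND hint matters on Standard Edition, where the optimizer does not otherwise consider the view's index.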

  • AtomicSwap instead of AtomicCompareAndSwap?

    - by anon
    I know that on Mac OS X / POSIX systems, there is atomic-compare-and-swap for C/C++ code via g++. However, I don't need the compare -- I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic_compare_and_swap ... and I just want to do the swap, without comparing.] Thanks!
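
    With C++11 (or any recent g++), std::atomic provides exactly this: exchange() stores a new value and returns the old one atomically, with no compare involved. A minimal sketch:

        #include <atomic>
        #include <cassert>

        int main() {
            std::atomic<int> slot{1};
            // Atomically write 2 and get the previous value back.
            int old = slot.exchange(2);
            assert(old == 1 && slot.load() == 2);
            return 0;
        }

    On pre-C++11 GCC the closest builtin is __sync_lock_test_and_set(&var, newval), which on most targets performs an atomic exchange (with acquire semantics only). Note that atomically swapping two arbitrary memory locations in a single step is a stronger operation that common hardware does not provide.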

  • Comparing indexPaths in a loop (iPhone)

    - by Brodie4598
    I am trying to compare an index path in my didSelectRowAtIndexPath delegate method with an array of index paths.

        for (n = 0; n < [tempMutArray count]; n = n + 1) {
            NSComparisonResult result = [indexPath compare:[tempMutArray objectAtIndex:n]];
            // What I want to do is write an if statement that executes a certain
            // block of code if the two index paths are equal, but I can't figure
            // out how to work with an NSComparisonResult.
        }
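
    NSComparisonResult is a plain enum, not an object, so it can be tested directly against NSOrderedSame; a minimal sketch, assuming the variable names above:

        for (NSUInteger n = 0; n < [tempMutArray count]; n++) {
            // compare: returns NSOrderedAscending/NSOrderedSame/NSOrderedDescending.
            if ([indexPath compare:[tempMutArray objectAtIndex:n]] == NSOrderedSame) {
                // The two index paths are equal.
            }
        }

    [indexPath isEqual:[tempMutArray objectAtIndex:n]] works just as well here, since equal index paths compare equal.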

  • Which component converts a flat file to XML in BizTalk Server?

    - by Anish
    Hi, I am new to BizTalk Server, so if this question is obvious, please forgive me. :( I have a flat file from one application, which I have to send to a BizTalk server. Which component in BizTalk Server converts my flat file to XML? Also, I have heard that I have to create an input schema (.xsd file); why do I need an input message schema? Thanks in advance!

  • SQL question: comparing data in rows

    - by rayhan
    Hi, I would like to compare numeric data in rows. For example, I have a table with a column as such:

        Number
        ======
        1.88
        9.99
        8.76
        9.88

    I want to compare the 2nd, 3rd, and 4th values to the 1st value, then the 3rd and 4th values to the 2nd, then the 4th to the 3rd. How can I construct a SQL query to do this?
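
    A self-join produces exactly these ordered pairs, provided the table has a column that defines row order, since rows in a SQL table have no inherent position; a minimal sketch, assuming the table is named NumberTable and a sequence column Id has been added:

        -- Every (earlier, later) pair: row 1 vs 2,3,4; row 2 vs 3,4; row 3 vs 4.
        SELECT a.Number AS EarlierValue,
               b.Number AS LaterValue,
               b.Number - a.Number AS Difference
        FROM NumberTable a
        JOIN NumberTable b ON a.Id < b.Id
        ORDER BY a.Id, b.Id;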

  • FOSS version of SQLCompare or something similar?

    - by Scott
    Actually, free is good enough, it doesn't have to be open source :) I'm currently using the Schema Compare utility of VS2008, but it doesn't have a command line interface and has some other weaknesses as well. I'm wondering what free tools others are using to provide command line schema comparisons/synchronizations? Thanks.

  • Compare base class part of sub class instance to another base class instance

    - by Anders Abel
    I have a number of DTO classes in a system, organized in an inheritance hierarchy:

        class Person
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        class PersonDetailed : Person
        {
            public string WorkPhone { get; set; }
            public string HomePhone { get; set; }
            public byte[] Image { get; set; }
        }

    The reason for splitting it up is to be able to get a list of people, e.g. for search results, without having to drag the heavy image and phone numbers along. The full PersonDetailed DTO is then loaded when the details for one person are selected. The problem I have run into is comparing these when writing unit tests. Assume I have:

        Person p1 = myService.GetAPerson();
        PersonDetailed p2 = myService.GetAPersonDetailed();

        // How do I compare the base class part of p2 to p1?
        Assert.AreEqual(p1, p2);

    The Assert above will fail, as p1 and p2 are different classes. Is it possible to somehow only compare the base class part of p2 to p1? Should I implement IEquatable<> on Person? Other suggestions?
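
    One option is a dedicated IEqualityComparer<Person> that inspects only base-class state, which keeps test-only equality out of the DTOs themselves; a minimal sketch, assuming the classes above:

        using System.Collections.Generic;

        class PersonBaseComparer : IEqualityComparer<Person>
        {
            public bool Equals(Person x, Person y)
            {
                if (ReferenceEquals(x, y)) return true;
                if (x == null || y == null) return false;
                // Only base-class properties are compared; PersonDetailed extras are ignored.
                return x.Id == y.Id
                    && x.FirstName == y.FirstName
                    && x.LastName == y.LastName;
            }

            public int GetHashCode(Person p) => p == null ? 0 : p.Id.GetHashCode();
        }

    In a test this reads as Assert.IsTrue(new PersonBaseComparer().Equals(p1, p2)). Implementing IEquatable<Person> on Person would also work, but it changes equality semantics for production code, not just for the tests.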

  • Validating WSDL/schema using CXF

    - by SGB
    I am having a hard time getting CXF to validate an XML request that my service creates for a 3rd party. My project uses Maven. Here is my project structure:

        Main Module
        + Sub-Module1 = Application
        + Sub-Module2 = Interfaces

    In Interfaces, inside src/main/resources, I have my WSDL and XSD:

        src/main/resources
        + mywsdl.wsdl
        + myschema.xsd

    The Interfaces sub-module is listed as a dependency in the Application sub-module. Inside the Application sub-module, there is a cxf configuration file in src/main/resources:

        <jaxws:client name="{myTargetNameSpaceName}port" createdFromAPI="true">
            <jaxws:properties>
                <entry key="schema-validation-enabled" value="true" />
            </jaxws:properties>
        </jaxws:client>

    and:

        <jaxws:endpoint name="{myTargetNameSpaceName}port" wsdlLocation="/mywsdl.wsdl"
                        createdFromAPI="true">
            <jaxws:properties>
                <entry key="schema-validation-enabled" value="true" />
            </jaxws:properties>
        </jaxws:endpoint>

    I tried changing name="{myTargetNameSpaceName}port" to name="{myEndPointName}port", but to no avail. My application works, but it just does not validate the XML I am producing that has to be consumed by a 3rd-party application. I would like to get the validation working, so that any request that I send would be a valid one. Any suggestions?

  • Compare only the date in JavaScript

    - by moleculezz
    Hi all, I can't seem to figure out what is wrong with my code. Maybe it would be simpler to just compare the date and not the time, but I'm not sure how to do that either, and I searched but couldn't find my exact problem. By the way, when I display the two dates in an alert, they show as exactly the same. My code:

        window.addEvent('domready', function() {
            var now = new Date();
            var input = $('datum').getValue();
            var dateArray = input.split('/');
            var userMonth = parseInt(dateArray[1]) - 1;
            var userDate = new Date();
            userDate.setFullYear(dateArray[2], userMonth, dateArray[0],
                now.getHours(), now.getMinutes(), now.getSeconds(), now.getMilliseconds());
            if (userDate > now) {
                alert(now + '\n' + userDate);
            }
        });

    Perhaps there is a simpler way to compare dates without including the time. Hope someone has an answer... Thanks!
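
    Zeroing out the time-of-day on both sides makes the comparison purely date-based; a minimal sketch, assuming the same dd/mm/yyyy input format as above:

        var now = new Date();
        now.setHours(0, 0, 0, 0);            // strip the time from today

        var parts = $('datum').getValue().split('/');
        // new Date(year, monthIndex, day) -- the time defaults to midnight.
        var userDate = new Date(parseInt(parts[2], 10),
                                parseInt(parts[1], 10) - 1,
                                parseInt(parts[0], 10));

        if (userDate.getTime() > now.getTime()) {
            // the user-entered date is strictly after today
        }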

  • How to get the Contacts database schema

    - by kamiomar
    Is there any link that provides the Contacts database schema? When I store a new phone number on the phone, the information is stored in the database, and I need the schema so I can create my own table for backup purposes. I have already read the People table columns with the following code:

        boolean displayFlag = false;
        String str = "";
        Uri CONTACT_URI = People.CONTENT_URI;
        Cursor cursor = mContext.getContentResolver().query(CONTACT_URI, null, null, null, null);
        String columnNames = "";
        if (cursor != null) {
            try {
                if (cursor.moveToFirst()) {
                    String[] columns = cursor.getColumnNames();
                    for (int i = 0; i < columns.length; i++) {
                        columnNames += "colName" + cursor.getColumnName(i)
                                + " : " + cursor.getString(i) + "colValue";
                    }
                }
            } finally {
                cursor.close();
            }
        }
        createImage(columnNames);
        if (displayFlag) {
            Toast.makeText(mContext, str, Toast.LENGTH_SHORT).show();
        }

    Thanks

  • how to design a schema where the columns of a table are not fixed

    - by hIpPy
    I am trying to design a schema where the columns of a table are not fixed. For example, I have an Employee table where the attributes of an Employee are not fixed and vary. The options I see are:

        1. Nullable columns in the Employee table itself, i.e. no normalization.
        2. Instead of adding nullable columns, separate those columns out into their
           individual tables; e.g. if Address is a column to be added, then create
           table Address[EmployeeId, AddressValue].
        3. Create tables ExtensionColumnName[EmployeeId, ColumnName] and
           ExtensionColumnValue[EmployeeId, ColumnValue]. ExtensionColumnName would
           have ColumnName as "Address" and ExtensionColumnValue would have
           ColumnValue as the address value.

        Employee table
            EmployeeId
            Name

        ExtensionColumnName table
            ColumnNameId
            EmployeeId
            ColumnName

        ExtensionColumnValue table
            EmployeeId
            ColumnNameId
            ColumnValue

    There is a drawback in the first two ways, as the schema changes with every new attribute, and adding a new attribute is frequent. I am not sure if this is a good or bad design. If someone has had a similar decision to make, please give an insight on things like foreign keys / data integrity, indexing, performance, reporting, etc.
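
    Option 3 is the classic entity-attribute-value (EAV) pattern; a minimal sketch of it in SQL, with foreign keys so orphaned values cannot appear (the types and sizes are placeholder assumptions, and this variant does not repeat EmployeeId in the attribute-name table):

        CREATE TABLE Employee (
            EmployeeId   INT PRIMARY KEY,
            Name         VARCHAR(100) NOT NULL
        );

        CREATE TABLE ExtensionColumnName (
            ColumnNameId INT PRIMARY KEY,
            ColumnName   VARCHAR(100) NOT NULL UNIQUE   -- e.g. 'Address'
        );

        CREATE TABLE ExtensionColumnValue (
            EmployeeId   INT NOT NULL REFERENCES Employee(EmployeeId),
            ColumnNameId INT NOT NULL REFERENCES ExtensionColumnName(ColumnNameId),
            ColumnValue  VARCHAR(4000),
            PRIMARY KEY (EmployeeId, ColumnNameId)      -- one value per attribute per employee
        );

    New attributes become rows in ExtensionColumnName, so the schema itself never changes; the trade-off is that all values share one (typically string) type, and reporting queries need one join or pivot per attribute.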

  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array();
            $dirTreeTemp = array();
            $ignore[] = '.';
            $ignore[] = '..';
            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        // display of file and directory name with their modification time
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file . " === " . date('Y-m-d H:i:s', $stat['mtime'])
                            . " Directory == " . $path . " === " . date('Y-m-d H:i:s', $statdir['mtime']);
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        // function call
        $dirTree = getDirectory('.', $ignore);
        // file structure array print
        print_r($dirTree);

    Now my requirement is this. I have two sites: the development/test site, where I test all changes, and the production site, where I finally publish all the changes once tested. For example, I have tested an image upload on the development site and found it fit to publish on the production site, so I transfer the development DB to the production DB; now I want to compare the file structures as well, to transfer the corresponding image file to the production folder. There can also be the situation where I update an image by editing it and uploading it under the same name; the image file would then already be present on production, which rules out a simple file_exists() check. For these situations, how can I compare the two file structures to get the synchronization done as per the requirement?
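
    Since both sites can run the same scan, one approach is to flatten each tree into a relative-path => mtime map and diff the maps; a minimal sketch (fileMap and the two paths are illustrative names, not from the original function):

        <?php
        // Flatten a directory into ['relative/path/file' => mtime] entries.
        function fileMap($base, $dir = '') {
            $map = array();
            foreach (scandir("$base/$dir") as $file) {
                if ($file === '.' || $file === '..') continue;
                $rel = ltrim("$dir/$file", '/');
                if (is_dir("$base/$rel")) {
                    $map += fileMap($base, $rel);
                } else {
                    $map[$rel] = filemtime("$base/$rel");
                }
            }
            return $map;
        }

        $dev  = fileMap('/path/to/dev');
        $prod = fileMap('/path/to/prod');

        // Files that exist only on dev -> copy to production.
        $newFiles = array_diff_key($dev, $prod);

        // Files on both sides but with a newer dev mtime -> overwrite on production.
        $changedFiles = array();
        foreach (array_intersect_key($dev, $prod) as $rel => $mtime) {
            if ($mtime > $prod[$rel]) $changedFiles[] = $rel;
        }

    Comparing a content hash (md5_file()) instead of the mtime is more reliable when the clocks of the two servers are not in sync.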

  • Database schema for a library

    - by ABach
    Hi all - I'm designing a library management system for a department at my university and I wanted to enlist your eyes on my proposed schema. This post is primarily concerned with how we store multiple copies of each book; something about what I've designed rubs me the wrong way, and I'm hoping you all can point out better ways to tackle things. For dealing with users checking out books, I've devised three tables: book, customer, and book_copy. The relationships between these tables are as follows:

        1. Every book has many book_copies (to avoid duplicating the book's
           information while storing the fact that we have multiple copies of it).
        2. Every customer has many book_copies (the other end of the relationship).

    The tables themselves are designed like this:

        book
            id, title, author, isbn, etc.

        customer
            id, first_name, last_name, email, address, city, state, zip, etc.

        book_copy
            id, book_id (FK to book), customer_id (FK to customer),
            checked_out, due_date, etc.

    Something about this seems incorrect (or at least inefficient) to me; the perfectionist in me feels like I'm not normalizing this data correctly. What say ye? Is there a better, more effective way to design this schema? Thanks!
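
    For reference, the proposed design written out as DDL (the column types are placeholder assumptions); note that book_copy.customer_id has to be nullable to represent a copy sitting on the shelf:

        CREATE TABLE book (
            id     INT PRIMARY KEY,
            title  VARCHAR(255) NOT NULL,
            author VARCHAR(255),
            isbn   VARCHAR(13)
        );

        CREATE TABLE customer (
            id         INT PRIMARY KEY,
            first_name VARCHAR(100),
            last_name  VARCHAR(100),
            email      VARCHAR(255)
        );

        CREATE TABLE book_copy (
            id          INT PRIMARY KEY,
            book_id     INT NOT NULL REFERENCES book(id),
            customer_id INT NULL REFERENCES customer(id),  -- NULL when on the shelf
            checked_out BOOLEAN NOT NULL DEFAULT FALSE,
            due_date    DATE NULL
        );

    If checkout history needs to survive returns, the usual refinement is a separate loan table keyed by copy and customer, so book_copy itself stays purely about physical inventory.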

  • Facebook style messaging system schema design

    - by Jamie
    Hi all, I'm looking to implement a Facebook-style messaging system (threaded messages) on a site of mine. Do you think this schema markup looks okay? Doctrine schema.yml:

        UserMessage:
          tableName: user_message
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            sender_id: { type: integer(10), notnull: true }
            sender_read: { type: boolean, default: 1 }
            subject: { type: string(255), notnull: true }
            message: { type: string(1000), notnull: true }
            hash: { type: string(32), notnull: true }
          relations:
            UserMessageRecipient as Recipient:
              type: many
              local: id
              foreign: message_id
            UserMessageReply as Reply:
              type: many
              local: id
              foreign: message_id

        UserMessageReply:
          tableName: user_message_reply
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            message: { type: string(1000), notnull: true }
            sender_id: { type: integer(10), notnull: true }
          relations:
            UserMessage as Message:
              local: message_id
              foreign: id
              type: one

        UserMessageRecipient:
          tableName: user_message_recipient
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            recipient_id: { type: integer(10), notnull: true }
            recipient_read: { type: boolean, default: 0 }

    When a new reply is made, I'll make sure the recipient_read boolean for each recipient is set to false, and of course I'll make sure sender_read is set to false too. I'm using a hash for the URL:

        http://example.com/user/messages/aadeb18f8bdaea49882ec4d2a8a3c062

    (As the id will be starting from 1, I don't wish to have http://example.com/user/messages/1. Yeah, I could start incrementing from a bigger number, but I'd prefer to start at 1.) Is this a good way to go about it? Your thoughts and suggestions would be hugely appreciated. Thanks guys!

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello all, I have a denormalized table containing employee information. The fields are employee id, name, and department name; the primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors, etc.). Example data:

        1  e1  dep1
        2  e1  dep2
        3  e2  dep1
        4  e2  dep3
        5  e3  dep1
        5  e3  dep2
        5  e3  dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to club all entities based on the employee id and return all the departments an employee belongs to as a list. I can merge the entities after retrieving them, but if I can use some EclipseLink feature out of the box, then it would be better. One way to do the read is the following. I create two dynamic types corresponding to employee:

        1. one having id, name as the primary key, and
        2. one having id, department as the primary key.

    I create a OneToManyMapping from the first type to the second one. Then when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA? I cannot get the same dynamic type configuration working for the write scenario. This is because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which put the following entries in the table:

        id   name     department
        102  emp_102
        102           st
        102           dep_102
        102           dep_102
        102           dep_102

    instead of:

        id   name     department
        102  emp_102  st
        102  emp_102  dep_102
        102  emp_102  dep_102
        102  emp_102  dep_102

    Is there any way I can get the write to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema or generating each row before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.

  • How to compare two structure strings in C++

    - by Arvandor
    Ok, so this week in class we're working with arrays. I've got an assignment that wanted me to create a structure for an employee containing an employee ID, first name, last name, and wages. Then it has me ask users for input for 5 different employees, all stored in an array of this structure, then ask them for a search field type, then a search value. Lastly, display all the information for all positive search results. I'm still new, so I'm sure it isn't a terribly elegant program, but what I'm trying to do now is figure out how to compare a user-entered string with the string stored in the structure... I'll try to give all the pertinent code below.

        struct employee
        {
            int empid;
            string firstname;
            string lastname;
            float wage;
        };

        employee emparray[5];
        employee value;

        // Code for populating emparray and structure, then determine search field etc.
        cout << "Enter a search value: ";
        cin >> value.lastname;

        for (i = 0; i < 5; i++)
        {
            if (strcmp(value.lastname.c_str, emparray[i].lastname.c_str) == 0)
            {
                output();
            }
        }

    Which... I thought would work, but it's giving me the following error:

        Error 1 error C3867: 'std::basic_string<_Elem,_Traits,_Alloc>::c_str':
        function call missing argument list; use
        '&std::basic_string<_Elem,_Traits,_Alloc>::c_str' to create a pointer to member

    Any thoughts on what's going on? Is there a way to compare two .name-notated strings without totally revamping the program? If you want to drill me on best practices, please feel free, but also please try to solve my particular problem.
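
    The compiler error is only saying that c_str was named without being called; adding the parentheses fixes it, and since both operands are std::string, plain == avoids strcmp entirely. A minimal, self-contained sketch (the sample values are made up):

        #include <cstring>
        #include <iostream>
        #include <string>
        using namespace std;

        struct employee { int empid; string firstname; string lastname; float wage; };

        int main() {
            employee emparray[5] = {};
            employee value;
            value.lastname = "Smith";
            emparray[2].lastname = "Smith";

            for (int i = 0; i < 5; i++) {
                // c_str is a member function: the missing () caused error C3867.
                if (strcmp(value.lastname.c_str(), emparray[i].lastname.c_str()) == 0) {
                    cout << "match at " << i << " (strcmp)\n";
                }
                // Simpler: std::string overloads ==, so strcmp is unnecessary.
                if (value.lastname == emparray[i].lastname) {
                    cout << "match at " << i << " (operator==)\n";
                }
            }
            return 0;
        }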

  • How to compare dates from a database using C#?

    - by user1490374
    I would like to take the dates selected from the database (every entry in EndDate) and compare them with today's date. Is there any way to do this programmatically, like extracting the dates and comparing them individually? I need this because I need to update the status for the table.

        string username;
        username = HttpContext.Current.User.Identity.Name;

        string date = DateTime.Now.ToString("MM/dd/yyyy");
        txtDate.Text = date;

        SqlConnection conn1 = new SqlConnection("Data Source=mydatasource\\sqlexpress;" +
            "Initial Catalog = Suite2; Integrated Security =SSPI");
        SqlDataAdapter adapter;
        string end;
        end = "SELECT EndDate FROM Table_Message WHERE username = '" + username + "'";
        adapter = new SqlDataAdapter(end, conn1);
        conn1.Open();

        DataSet ds = new DataSet();
        adapter.Fill(ds); // Execute the sql command

        GridView2.DataSource = ds;
        GridView2.DataBind();
        conn1.Close();
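
    Once the DataSet is filled, each EndDate can be pulled back out as a DateTime and its date part compared with DateTime.Today; a minimal sketch that plugs in after adapter.Fill(ds), assuming EndDate is a datetime column:

        foreach (DataRow row in ds.Tables[0].Rows)
        {
            DateTime endDate = Convert.ToDateTime(row["EndDate"]);

            // .Date strips the time component, so this compares calendar dates only.
            if (endDate.Date < DateTime.Today)
            {
                // EndDate has passed -- update the status for this entry here.
            }
        }

    As an aside, building the SQL by concatenating username invites SQL injection; a parameterized query (cmd.Parameters.AddWithValue("@username", username)) is the safer pattern.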

  • Perl, LibXML and Schemas

    - by Xetius
    I have an example Perl script with which I am trying to load and validate a file against a schema, then interrogate various nodes:

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use XML::LibXML;

        my $filename = 'source.xml';
        my $xml_schema = XML::LibXML::Schema->new(location => 'library.xsd');
        my $parser = XML::LibXML->new();
        my $doc = $parser->parse_file($filename);

        eval { $xml_schema->validate($doc); };
        if ($@) {
            print "File failed validation: $@" if $@;
        }

        eval {
            print "Here\n";
            foreach my $book ($doc->findnodes('/library/book')) {
                my $title = $book->findnodes('./title');
                print $title->to_literal(), "\n";
            }
        };
        if ($@) {
            print "Problem parsing data : $@\n";
        }

    Unfortunately, although it is validating the XML file fine, it is not finding any book nodes and therefore not printing out anything. If I remove the schema from the XML file and the validation from the Perl file, then it works fine. I am using the default namespace. If I change it to not use the default namespace (xmlns:lib="http://libs.domain.com"), prefix all items in the XML file with lib, and change the XPath expressions to include the namespace prefix (/lib:library/lib:book), then it again works fine. Why? And what am I missing?

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <library xmlns="http://lib.domain.com"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://lib.domain.com .\library.xsd">
          <book>
            <title>Perl Best Practices</title>
            <author>Damian Conway</author>
            <isbn>0596001738</isbn>
            <pages>542</pages>
            <image src="http://www.oreilly.com/catalog/covers/perlbp.s.gif" width="145" height="190"/>
          </book>
          <book>
            <title>Perl Cookbook, Second Edition</title>
            <author>Tom Christiansen</author>
            <author>Nathan Torkington</author>
            <isbn>0596003137</isbn>
            <pages>964</pages>
            <image src="http://www.oreilly.com/catalog/covers/perlckbk2.s.gif" width="145" height="190"/>
          </book>
          <book>
            <title>Guitar for Dummies</title>
            <author>Mark Phillips</author>
            <author>John Chappell</author>
            <isbn>076455106X</isbn>
            <pages>392</pages>
            <image src="http://media.wiley.com/product_data/coverImage/6X/07645510/076455106X.jpg" width="100" height="125"/>
          </book>
        </library>

    XSD:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema xmlns="http://lib.domain.com"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   elementFormDefault="qualified"
                   targetNamespace="http://lib.domain.com">
          <xs:attributeGroup name="imagegroup">
            <xs:attribute name="src" type="xs:string"/>
            <xs:attribute name="width" type="xs:integer"/>
            <xs:attribute name="height" type="xs:integer"/>
          </xs:attributeGroup>
          <xs:element name="library">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="unbounded" name="book">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="title" type="xs:string"/>
                      <xs:element maxOccurs="unbounded" name="author" type="xs:string"/>
                      <xs:element name="isbn" type="xs:string"/>
                      <xs:element name="pages" type="xs:integer"/>
                      <xs:element name="image">
                        <xs:complexType>
                          <xs:attributeGroup ref="imagegroup"/>
                        </xs:complexType>
                      </xs:element>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
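
    The XPath in the script queries nodes in no namespace, while the document's elements live in the http://lib.domain.com default namespace, and XPath itself never applies a default namespace; registering a prefix on an XPathContext is the standard fix. A minimal sketch (the lib prefix is arbitrary and exists only on the query side):

        use XML::LibXML::XPathContext;

        my $xpc = XML::LibXML::XPathContext->new($doc);
        # Bind an arbitrary prefix to the document's default namespace URI.
        $xpc->registerNs('lib', 'http://lib.domain.com');

        foreach my $book ($xpc->findnodes('/lib:library/lib:book')) {
            # Evaluate a relative XPath against each book node via the same context.
            print $xpc->findvalue('./lib:title', $book), "\n";
        }

    The documents themselves stay untouched; only the queries need the prefix.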

  • Oracle Internet Directory - Required Schemas already loaded

    - by Brandon Kreisel
    After a successful installation of Oracle Internet Directory (OID), I attempted to remove and re-install it with a different configuration. First I ran ./runInstaller.sh -deinstall from the OID middleware directory to uninstall. Then I ran the installer again, but it complained on the Specify Schema Database step. Upon connecting to the database, the installer threw:

        INST-5174: Required schemas are already loaded in the specified database.

    I connected to the target database and removed the OID schemas I knew of:

        SQL> drop user ODS cascade;
        SQL> drop user ODSSM cascade;

    That did not work and the error still appears. What steps am I missing? Note: the database is 11g and it was brand new before installing OID, so there is no other data; select * from all_users doesn't show any other schemas related to OID from what I can tell, and the latest user creation date is OCT 2010.

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There’s no need to learn the intricacies of the data integration tool or to write code. Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file.

    Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control. If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.

    In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens at this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.

    I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar. This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table’s description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
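
    For reference, the table created by the Write Table operator would correspond to something like the following DDL (a sketch only; the table name is assumed and the exact types depend on the wizard choices described above):

        CREATE TABLE TerminatedEmployees (
            EmployeeID      INT,
            StartingDate    DATETIME,     -- constrained to on/after 1980-01-01
            TerminationDate DATETIME,     -- constrained to on/before 1999-12-31 23:59
            JobDescription  VARCHAR(40),  -- width set to 40 in the wizard
            DepartmentID    INT,
            FinalSalary     MONEY         -- '$85,000' arrives as 85000.00
        );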

  • How can I compare two directories to compare missing files, when the directories don't have the same structure?

    - by David Dean
    I've been sent an HDD of new and updated files from an organisation that we are working with, but we already have most of the files sitting on our servers, and we would like to update our local versions to match theirs. Normally, this would be a job for something like rsync, but our problem is that the directory structure they provide is very poorly organised and we've had to rearrange their files in the past to work best with our systems. So, my question is: how can I find out which files in the set they have provided are new or different to the versions that we have, when the directory structures are different? Once that question is answered, we can update the changed files and work out where to put the new files on our system, probably somewhat manually.
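
    When the paths can't be trusted, the content still can: hashing every file on both sides and comparing the hash sets finds material that is genuinely new or changed regardless of where it lives on disk. A minimal sketch with GNU tools and bash (the mount points are placeholders):

        # Hash every file under each tree (this can take a while for a full HDD).
        find /mnt/theirs -type f -exec md5sum {} + | sort > theirs.md5
        find /srv/ours   -type f -exec md5sum {} + | sort > ours.md5

        # Hashes present only in their set = content that is new or changed
        # relative to anything we have, regardless of directory layout.
        comm -23 <(cut -d' ' -f1 theirs.md5 | sort -u) \
                 <(cut -d' ' -f1 ours.md5  | sort -u) > new-or-changed.hashes

        # Map the interesting hashes back to their paths on the incoming drive.
        grep -F -f new-or-changed.hashes theirs.md5

    Tools like fdupes automate parts of this, and rsync --checksum becomes useful again once the trees have been brought into alignment.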
