Search Results

Search found 675 results on 27 pages for 'pk'.


  • Mapping One-to-One subclass in Fluent NHibernate

    - by Mike C.
    I have the following database structure:

    Event table: Id - Guid (PK), Name - NVarChar, Description - NVarChar
    SpecialEvent table: Id - Guid (PK), StartDate - DateTime, EndDate - DateTime

    I have an abstract Event class, and a SpecialEvent class that inherits from it. Eventually I will have a RecurringEvent class which will also inherit from the Event class. I'd like to map the SpecialEvent class while preserving a one-to-one relationship mapped with the Ids, if possible. Can anybody point me in the correct direction? Thanks!

    Read the article

  • Complex query with two tables and multiple data and price ranges

    - by TiuTalk
    Let's suppose that I have these tables:

    properties: id (INT, PK), name (VARCHAR)
    properties_prices: id (INT, PK), property_id (INT, FK), date_begin (DATE), date_end (DATE), price_per_day (DECIMAL), price_per_week (DECIMAL), price_per_month (DECIMAL)

    My visitor runs a search like: list the first 10 (pagination) properties where the price per day (the price_per_day field) is between 10 and 100 for the period from 1st May until 31st December. I know that's a huge query, and I need to paginate the results, so I must do all the calculation and logic in only one query... that's why I'm here! :)
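    One possible shape for such a query (an untested sketch: it assumes MySQL, that a property qualifies when one of its price rows overlapping the requested period has a day price in range, and illustrative dates for the 1st May - 31st December window):

        SELECT p.id, p.name, MIN(pp.price_per_day) AS best_price_per_day
        FROM properties p
        JOIN properties_prices pp ON pp.property_id = p.id
        WHERE pp.price_per_day BETWEEN 10 AND 100
          AND pp.date_begin <= '2010-12-31'   -- period end   (year assumed)
          AND pp.date_end   >= '2010-05-01'   -- period start (year assumed)
        GROUP BY p.id, p.name
        ORDER BY best_price_per_day
        LIMIT 0, 10;   -- first page of 10 results

    Whether the price filter must hold for every overlapping row or for at least one of them depends on the business rule, so the WHERE/HAVING placement may need to change.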

    Read the article

  • Hibernate using OneToOne

    - by Soft
    I have two tables:

    tab1 { col1 (PK), col2, col3 }
    tab2 { col1, col2 (PK), col3 }

    I am using the Hibernate "OneToOne" annotation for the join. I have the Hibernate class below for tab1:

        class tab1 {
            @OneToOne
            @JoinColumn(name = "col2", referencedColumnName = "col1")
            private tab2 t2;
        }

    I was expecting it to run the SQL below:

        select * from tab1 t1, tab2 t2 where t1.col1 = t2.col2

    But it is not working as I expected. Please help.

    Read the article

  • convert json string into array or object...

    - by qulzam
    I get some JSON data from the web which looks like this:

    [{"pk": 1, "model": "stock.item", "fields": {"style": "f/s", "name": "shirt", "color": "red", "sync": 1, "fabric_code": "0012", "item_code": "001", "size": "34"}}, {"pk": 2, "model": "stock.item", "fields": {"style": "febric", "name": "Trouser", "color": "red", "sync": 1, "fabric_code": "fabric code", "item_code": "0123", "size": "44"}}]

    How can I use it in a C# WinForms desktop application? I already receive this data as a string. All types of answers are welcome.

    Read the article

  • How to get a datetime column in SQLite with Objective-C

    - by Undolog
    Hi, how do you get a datetime column in SQLite with Objective-C? I have a table with 4 fields: pk, datetime, value1 and value2. pk (the primary key), value1 and value2 are integers, so I am using:

        int value1 = sqlite3_column_int(statement, 2);
        int value2 = sqlite3_column_int(statement, 3);

    But what should I use for datetime? Thanks

    Read the article

  • Cannot resolve collation conflict in Union select

    - by phenevo
    Hi, I've got two queries. The first doesn't work:

        select hotels.TargetCode as TargetCode from hotels
        union all
        select DuplicatedObjects.duplicatetargetCode as TargetCode from DuplicatedObjects where DuplicatedObjects.objectType=4

    because I get the error: "Cannot resolve collation conflict for column 1 in SELECT statement."

    The second works:

        select hotels.Code from hotels where hotels.targetcode is not null
        union all
        select DuplicatedObjects.duplicatetargetCode as Code from DuplicatedObjects where DuplicatedObjects.objectType=4

    Structure:

    Hotels.Code - PK, nvarchar(40)
    Hotels.TargetCode - nvarchar(100)
    DuplicatedObjects.duplicatetargetCode - PK, nvarchar(100)
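    A common fix for this error (a sketch, assuming the two columns simply carry different collations) is to force both halves of the UNION to a common collation, for example the database default:

        select hotels.TargetCode COLLATE DATABASE_DEFAULT as TargetCode
        from hotels
        union all
        select DuplicatedObjects.duplicatetargetCode COLLATE DATABASE_DEFAULT as TargetCode
        from DuplicatedObjects
        where DuplicatedObjects.objectType = 4

    Alternatively, altering one of the columns so that both use the same collation removes the need for the per-query COLLATE clause.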

    Read the article

  • postgresql: insert a record into a table with a serial value

    - by Jesse Siu
    My database uses PostgreSQL. The table's pk is a serial value. If I want to insert a record into the table, do I need to type the pk, or will it automatically contain the id? Can you give me an example of how to insert a record into dataset?

        CREATE TABLE dataset
        (
          id serial NOT NULL,
          age integer NOT NULL,
          name character varying(32) NOT NULL,
          description text NOT NULL DEFAULT ''::text,
          CONSTRAINT dataset_pkey PRIMARY KEY (id)
        )
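    You can leave the serial column out of the insert and let it assign itself. A minimal sketch (the values are made up for illustration):

        INSERT INTO dataset (age, name, description)
        VALUES (30, 'sample name', 'sample description')
        RETURNING id;   -- PostgreSQL hands back the generated id here

    The RETURNING clause is optional; without it the row is still inserted and id is filled from the sequence behind the serial column.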

    Read the article

  • Transfer Data between databases with postgres

    - by user227932
    I need to transfer some data from another database. The old database is called paw1.moviesDB and the new database is paw1. The schema of each table is the following:

    Awards (name of the table) (new DB): Id [PK] Serial, Award
    Nominations (name of the table) (old DB): Id [PK] Serial, nominations

    I want to copy the data from the old DB to the new DB.
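    If moviesDB is actually a schema inside the paw1 database (a guess based on the paw1.moviesDB naming), a plain INSERT ... SELECT is enough; leaving the serial Id out lets it be regenerated. A sketch:

        INSERT INTO awards (award)
        SELECT nominations
        FROM moviesdb.nominations;

    If the two tables really live in separate databases, PostgreSQL cannot query across them directly; dblink (or a pg_dump/restore of the table) would be needed instead.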

    Read the article

  • Updating a sql server table with data from another table

    - by David G
    I have two basic SQL Server tables:

    Customer (ID [pk], AddressLine1, AddressLine2, AddressCity, AddressDistrict, AddressPostalCode)
    CustomerAddress (ID [pk], CustomerID [fk], Line1, Line2, City, District, PostalCode)

    CustomerAddress contains multiple addresses for each Customer record. For each Customer record I want to merge in the most recent CustomerAddress record, where most recent is determined by the highest CustomerAddress ID value. I've currently got the following:

        UPDATE Customer
        SET AddressLine1 = CustomerAddress.Line1,
            AddressPostalCode = CustomerAddress.PostalCode
        FROM Customer, CustomerAddress
        WHERE Customer.ID = CustomerAddress.CustomerID

    which works, but how can I ensure that the most recent (highest ID) CustomerAddress record is selected to update the Customer table?
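    One way to pin the update to the latest address per customer (a sketch; it assumes SQL Server 2005 or later so that ROW_NUMBER() is available):

        UPDATE c
        SET AddressLine1      = ca.Line1,
            AddressPostalCode = ca.PostalCode
        FROM Customer c
        JOIN (
            SELECT CustomerID, Line1, PostalCode,
                   ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY ID DESC) AS rn
            FROM CustomerAddress
        ) ca ON ca.CustomerID = c.ID
        WHERE ca.rn = 1;

    On SQL Server 2000 the same effect can be had with a correlated subquery that selects MAX(ID) per customer.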

    Read the article

  • make select @@IDENTITY; a long?

    - by acidzombie24
    I am grabbing the last row id and I am doing this:

        select @@IDENTITY

        pk = (long)cmd.ExecuteScalar();

    I get an invalid cast exception because the value comes back as an int instead of a long. Why doesn't this return a long? Can I make it return a long? My solution for now is to use:

        pk = Convert.ToInt64(cmd.ExecuteScalar());
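    Besides converting on the client side, the cast can also be pushed into the SQL itself (a sketch; which identity function is available depends on the engine, and on full SQL Server SCOPE_IDENTITY() is usually preferred over @@IDENTITY because it ignores identities generated by triggers):

        SELECT CAST(@@IDENTITY AS bigint);
        -- or, on SQL Server proper:
        SELECT CAST(SCOPE_IDENTITY() AS bigint);

    With the value returned as bigint, the (long) cast on ExecuteScalar should succeed.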

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system)

    One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided - in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model element related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

    Automatic model element naming features

    When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also comes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

    Strippers!

    The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

    Underscores Be Gone

    Another thing you might want to get rid of is underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to do so.

    PasCal everywhere... or not, your call

    LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment and lower-case the rest.
    Say we keep the defaults, which is remove underscores and PasCal case always, and strip the TBL_ fragment. With our example TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased and the rest lower-cased, so this results in OrderLines. Almost there!

    Pluralization and Singularization

    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after.

    Show me the patterns!

    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define what the auto-created navigator names will look like. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

    When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes; on NHibernate and EF these can be hidden in the generated code if you want to.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as the foreign key name in Order. The pattern for foreign key fields gives you that freedom.

    Abbreviations... make sense of OrdNr and friends

    I already described word breaks in the PasCal casing paragraph, and how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: he can't stand it; typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you would also like is to not have to rename that field manually.
    There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs, and during reverse engineering of model elements from tables/views LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two pairs: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.

    Automatic relational model element naming features

    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations it's key that you have the flexibility to control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

    Underscores, welcome back!

    Not every database is case insensitive, and not every organization requires PasCal cased table/field names; some demand all lower or all upper case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below.

    Casing directives so everyone can sleep well at night

    For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE.

    Sequence naming after a pattern

    Some databases support sequences, and using model-first development it's key for sequences, when needed, to be created automatically, and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and, as you want all numeric PK fields to be sequenced, you have enabled this with the setting Auto assign sequences to integer pks.
    When you're using LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I previously discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed.

    Grouping and schemas

    When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set the setting Set schema name after group name to true (the default). This gives you total control over where what is placed in the database from your model.

    But I want plural table names... and TBL_ prefixes!

    For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version.

    Conclusion

    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    - Complex headers such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation of the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD file

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. NXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
The following is a sample NXSD schema: <?xml version="1.0"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" elementFormDefault="qualified" xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv" targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv" attributeFormDefault="unqualified" nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD"> <xsd:element name="Root">         <xsd:complexType><xsd:sequence>       <xsd:element name="Header">                 <xsd:complexType><xsd:sequence>                         <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>                         <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>                         </xsd:sequence></xsd:complexType>                         </xsd:element>                 </xsd:sequence></xsd:complexType>         <xsd:element name="Customer" maxOccurs="unbounded">                 <xsd:complexType><xsd:sequence>                 <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>                         <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="," />                         <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" />                         </xsd:sequence></xsd:complexType>                         </xsd:element>                 </xsd:sequence></xsd:complexType> </xsd:element> </xsd:schema> The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator chars. There are various constructs in nXSD to parse fixed length fields, look ahead in the document for string occurences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema Editor or created using the Native Format Builder Wizard. Both Native Format Builder Wizard as well as the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created:   The Native Format Builder guides through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to “approximate” it with a similar simple format and then add the complex components manually.  The resulting *.xsd file can be copied and used as the format for ODI, other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages, and ODI for large batches. This nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example: Redwood City Downtown Branch, 06/01/2011 Ebeneezer Scrooge, Sandy Lane, Atherton Tiny Tim, Winton Terrace, Menlo Park The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships. 
    The tables for this example are:

    Table ROOT (1 row):
      ROOTPK - Primary Key for root element
      SNPSFILENAME - Name of the file
      SNPSFILEPATH - Path of the file
      SNPSLOADDATE - Date of load

    Table HEADER (1 row):
      ROOTFK - Foreign Key to ROOT record
      ROWORDER - Order of row in native document
      BRANCH - Data
      BRANCHORDER - Order of Branch within row
      LISTDATE - Data
      LISTDATEORDER - Order of ListDate within row

    Table ADDRESS (2 rows):
      ROOTFK - Foreign Key to ROOT record
      ROWORDER - Order of row in native document
      NAME - Data
      NAMEORDER - Order of Name within row
      STREET - Data
      STREETORDER - Order of Street within row
      CITY - Data
      CITYORDER - Order of City within row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT. Deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on the local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings such as the source native file, the nXSD schema file, the root element, as well as the external database can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use. Ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design-time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets it has to be made sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test XML file when testing a connection in the Dataserver.
If either the nXSD has an error or the data is non-compliant to the schema, an error will be displayed. Sample error message: Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input. Complex File FAQ Is the size of the native file limited by available memory? No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory. Should I always use the complex file driver instead of the file driver in ODI now? No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver. Are we generating XML out of flat files before we write it into a database? We don’t materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use XSD-derived nXSD schema as its type system. We use the nXSD schema because is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and enables users to share schemas across products. Is the nXSD file interchangeable with SOA Suite? Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format. Can I start the Native Format Builder from the ODI Studio? No, the Native Format Builder has to be started from a JDeveloper with BPEL instance. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors. When is the database data written back to the native file? Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading so that no unnecessary write-backs occur. Is the nXSD metadata part of the ODI Master or Work Repository? No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system. Where can I find sample nXSD files? The Application Server Adapter Users Guide contains nXSD samples for various different use cases.

    Read the article

  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality albeit with a much more complex API and corresponiding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality dynamically access COM objects passed to .NET often is pretty tedious and ugly. Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code: *** Create .NET COM InstanceloNet = CREATEOBJECT('DotNetCom.DotNetComPublisher') *** Create a Customer Object Instance (factory method) loCustomer = GetCustomer() loCustomer.Name = "Rick Strahl" loCustomer.Company = "West Wind Technologies" loCustomer.creditLimit = 9999999999.99 loCustomer.Address.StreetAddress = "32 Kaiea Place" loCustomer.Address.Phone = "808 579-8342" loCustomer.Address.Email = "[email protected]" *** Pass Fox Object and echo back values ? 
loNet.PassRecordObject(loObject) RETURN FUNCTION GetCustomer LOCAL loCustomer, loAddress loCustomer = CREATEOBJECT("EMPTY") ADDPROPERTY(loCustomer,"Name","") ADDPROPERTY(loCustomer,"Company","") ADDPROPERTY(loCUstomer,"CreditLimit",0.00) ADDPROPERTY(loCustomer,"Entered",DATETIME()) loAddress = CREATEOBJECT("Empty") ADDPROPERTY(loAddress,"StreetAddress","") ADDPROPERTY(loAddress,"Phone","") ADDPROPERTY(loAddress,"Email","") ADDPROPERTY(loCustomer,"Address",loAddress) RETURN loCustomer ENDFUNC Now prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection and the method code to do this would looks something like this in the .NET component: public string PassRecordObject(object FoxObject) { // *** using raw Reflection string Company = (string) FoxObject.GetType().InvokeMember( "Company", BindingFlags.GetProperty,null, FoxObject,null); // using the easier ComUtils wrappers string Name = (string) ComUtils.GetProperty(FoxObject,"Name"); // Getting Address object – then getting child properties object Address = ComUtils.GetProperty(FoxObject,"Address");    string Street = (string) ComUtils.GetProperty(FoxObject,"StreetAddress"); // using ComUtils 'Ex' functions you can use . Syntax     string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"AddressStreetAddress"); return Name + Environment.NewLine + Company + Environment.NewLine + StreetAddress + Environment.NewLine + " FOX"; } Note that the FoxObject is passed in as type object which has no specific type. Since the object doesn’t exist in .NET as a type signature the object is passed without any specific type information as plain non-descript object. To retrieve a property the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection Wrappers I mentioned earlier but even with those ComUtils calls the code is pretty ugly requiring passing the objects for each call and casting each element. Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural: public string PassRecordObjectDynamic(dynamic FoxObject) { // *** using raw Reflection string Company = FoxObject.Company; // *** using the easier ComUtils class string Name = FoxObject.Name; // *** using ComUtils 'ex' functions to use . Syntax string Address = FoxObject.Address.StreetAddress; return Name + Environment.NewLine + Company + Environment.NewLine + Address + Environment.NewLine + " FOX"; } As you can see the parameter is of type dynamic which as the name implies performs Reflection lookups and evaluation on the fly so all the Reflection code in the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away – dynamic types like var can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (ie. the left side of the expression is strongly typed) no explicit casts are required. Note that although you get to use plain object syntax in the code above you don’t get Intellisense in Visual Studio because the type is dynamic and thus has no hard type definition in .NET . 
The above example calls a .NET Component from VFP, but it also works the other way around. Another frequent scenario is an .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object returns a FoxPro Cursor Record as an object: DEFINE CLASS FoxData AS SESSION OlePublic cAppStartPath = "" FUNCTION INIT THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) ) SET PATH TO ( THIS.cAppStartpath ) ENDFUNC FUNCTION GetRecord(lnPk) LOCAL loCustomer SELECT * FROM tt_Cust WHERE pk = lnPk ; INTO CURSOR TCustomer IF _TALLY < 1 RETURN NULL ENDIF SCATTER NAME loCustomer MEMO RETURN loCustomer ENDFUNC ENDDEFINE If you call this from a .NET application you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly: int pk = 0; int.TryParse(Request.QueryString["id"],out pk); // Create Fox COM Object with Com Callable Wrapper FoxData foxData = new FoxData(); dynamic foxRecord = foxData.GetRecord(pk); string company = foxRecord.Company; DateTime entered = foxRecord.Entered; This code looks simple and natural as it should be – heck you could write code like this in days long gone by in scripting languages like ASP classic for example. Compared to the Reflection code that previously was necessary to run similar code this is much easier to write, understand and maintain. For COM interop and Visual FoxPro operation dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET. Regardless of whether you’re using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET and for this the dynamic typing makes coding much cleaner and more readable without having to create custom Reflection wrappers. As a bonus the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls especially if members are repeatedly accessed. © Rick Strahl, West Wind Technologies, 2005-2010Posted in COM  FoxPro  .NET  CSharp  

    Read the article

  • Nature of lock on child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear devs, for a couple of days I have been thinking about the following scenario. Consider two tables with a parent-child relationship of the one-to-many kind. On removal of the parent row I have to delete the child rows that are related to the parent. Simple, right? I have to use a transaction scope to do the above operation. I can do it as follows (it's pseudocode, but I am doing this in C# code using an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
          read all child where child.fk = p1
          foreach (child)
            delete child where child.pk = cx
          delete parent where parent.pk = p1
        commit trans

    OR

        begin transaction (read committed)
          delete all child where child.fk = p1
          delete parent where parent.pk = p1
        commit trans

    Now there are a couple of questions in my mind:

    1. Which one of the above is better to use, especially considering a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds?
    2. Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes, how does it ensure this? Does it take table-level locks, or what?
    3. Is there any kind of index locking supported by SQL Server? If yes, what does it do and how can it be used?

    Regards, Mubashar
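    For reference, a compact sketch of the second variant in T-SQL (table and column names taken from the pseudocode above):

        BEGIN TRANSACTION;
            DELETE FROM child  WHERE fk = @p1;
            DELETE FROM parent WHERE pk = @p1;
        COMMIT TRANSACTION;

    If the foreign key is declared with ON DELETE CASCADE, the first DELETE becomes unnecessary. Note that READ COMMITTED by itself does not prevent a concurrent insert of a new child for p1 before the commit; that typically needs a stricter isolation level such as SERIALIZABLE (or an equivalent HOLDLOCK hint) so that key-range locks are taken.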

    Read the article

  • Accommodating hierarchical data in SQL Server 2005 database design

    - by Remnant
    Context: I am fairly new to database design (= know the basics) and am grappling with how best to design my database for a project I am currently working on. In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course, e.g. moving objects, fire safety, hygiene etc. In terms of my database design I need to accommodate the following:

    - Each location can have multiple divisions
    - Each division can have multiple departments
    - Each department can have multiple functions
    - Each function can have multiple job roles
    - Each job role can have different course requirements

    Also note that the structure at each location may not be the same, e.g. the departments within divisions are not the same across locations, and the functions within departments may also differ.

    Edit - updated to better articulate the problem

    Let's assume I am just looking at Location, Division and Department and I have my database as follows:

    LocationTable: LocationID (PK), LocationName
    DivisionTable: DivisionID (PK), DivisionName
    DepartmentTable: DepartmentID (PK), DepartmentName

    There is a many-to-many relationship between Locations and Divisions and also between Departments and Divisions. Suppose I set up a 'Junction Table' as follows:

    Location_Division: LocationID (FK), DivisionID (FK)

    Using Location_Division I could easily pull back the Divisions for any Location. However, suppose I want to pull back all departments for a given Division in a given Location. If I set up another 'Junction Table' for Division and Department, then I can't see how I would differentiate Division by Location:

    Division_Department: DivisionID (FK), DepartmentID (FK)

    Location_Division rows: (1,1), (1,2), (2,1), (2,2)
    Division_Department rows: (1,1), (1,2), (2,1), (2,2)

    Do I need to expand the number of columns in my 'Junction Table', e.g.:

    Location_Division_Department: LocationID (FK), DivisionID (FK), DepartmentID (FK)

    Location_Division_Department rows: (1,1,1), (1,1,2), (1,1,3), (2,1,1), (2,1,2), (2,1,3)
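    If the three-way junction table turns out to be the right shape, a possible definition looks like this (a sketch; data types are assumed):

        CREATE TABLE Location_Division_Department (
            LocationID   int NOT NULL REFERENCES LocationTable (LocationID),
            DivisionID   int NOT NULL REFERENCES DivisionTable (DivisionID),
            DepartmentID int NOT NULL REFERENCES DepartmentTable (DepartmentID),
            PRIMARY KEY (LocationID, DivisionID, DepartmentID)
        );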

    Read the article

  • NHibernate, collections and composite id

    - by Ciaran
    Hi, banging my head here and thought that someone out there might be able to help. I have the tables below:

    Bucket ( bucketId smallint (PK), name varchar(50) )
    BucketUser ( UserId varchar(10) (PK), bucketId smallint (PK) )

    The composite key is not the problem - that's OK, I know how to get around it - but I want my Bucket class to contain an IList of BucketUser. I read the online reference and thought that I had cracked it, but I haven't. The two mappings are below:

    -- bucket --
        <id name="BucketId" column="BucketId" type="Int16" unsaved-value="0">
          <generator class="native"/>
        </id>
        <property column="BucketName" type="String" name="BucketName"/>
        <bag name="Users" table="BucketUser" inverse="true" generic="true" lazy="true">
          <key>
            <column name="BucketId" sql-type="smallint"/>
            <column name="UserId" sql-type="varchar"/>
          </key>
          <one-to-many class="Bucket,Impact.Dice.Core" not-found="ignore"/>
        </bag>

    -- bucketUser --

    Read the article

  • MYSQL Merging 2 results into 1 table

    - by AlphaRomeo69
    I'm building a C# program and am currently stuck at fetching data from a MySQL database and binding it to a grid view. I have been researching for a few days now, but to no avail. I have 4 tables in the database:

    table 1 - alpha: (id, type, user, role)
    table 2 - bravo: (id, type, date, user)
    table 3 - charlie: (id, type, cat, doneby, comment)
    table 4 - delta: (id, type, cat, doneby)

    The PK of alpha and bravo is (id); the PK of charlie and delta is (id, type).

    I successfully built a query1 by inner joining alpha, bravo and charlie, which gives the result (id, type, date, user, role, cat, doneby, comment), and I also successfully built a query2 by inner joining alpha, bravo and delta, which gives the result (id, type, date, user, role, cat, doneby).

    Right now, I'm trying to build a query3 which will merge the results from query1 and query2 together. My attempts so far lead to (id, type, date, user, role, cat, doneby, comment, id, type, date, user, role, cat, doneby). As I do not want the repeated columns, I would like to seek advice on how to get the result to become like the one below, by placing the records as new tuples in the result table: (id, type, date, user, role, cat, doneby, comment). Thanks!

    P.S: the PK would not pose a problem due to (id, type)
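    The usual way to stack two compatible result sets as extra rows rather than extra columns is UNION ALL, padding the column that exists in only one of them (a sketch; query1 and query2 stand for the two join queries described above):

        SELECT id, type, date, user, role, cat, doneby, comment
        FROM ( ...query1... ) AS q1          -- substitute the first join query here
        UNION ALL
        SELECT id, type, date, user, role, cat, doneby, NULL AS comment
        FROM ( ...query2... ) AS q2;         -- substitute the second join query here

    Both branches must select the same number of columns in the same order, which is why the missing comment column is filled with NULL in the second branch.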

    Read the article

  • How to Set Up a Customer Table with Multiple Phone Numbers? - Relational Database Design

    - by user311509
    CREATE TABLE Phone (
      phoneID - PK
      . . .
    );

    CREATE TABLE PhoneDetail (
      phoneDetailID - PK
      phoneID - FK points to Phone
      phoneTypeID ...
      phoneNumber ...
      . . .
    );

    CREATE TABLE Customer (
      customerID - PK
      firstName
      phoneID - Unique FK points to Phone
      . . .
    );

    A customer can have multiple phone numbers, e.g. Cell, Work, etc. phoneID in the Customer table is unique and points to phoneID in the Phone table. If a customer record is deleted, the phoneID in the Phone table should also be deleted. Do you have any concerns about my design? Is this designed properly? My problem is that phoneID in the Customer table is a child, and if the child record is deleted then I cannot delete the parent (Phone) record automatically.
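    A common alternative design (a sketch; types and names are assumed) is to point each phone row at its customer instead, so all of a customer's numbers are removed automatically by a cascading foreign key:

        CREATE TABLE Customer (
            customerID  INT PRIMARY KEY,
            firstName   VARCHAR(50) NOT NULL
        );

        CREATE TABLE Phone (
            phoneID     INT PRIMARY KEY,
            customerID  INT NOT NULL,
            phoneTypeID INT NOT NULL,          -- Cell, Work, ...
            phoneNumber VARCHAR(20) NOT NULL,
            FOREIGN KEY (customerID) REFERENCES Customer (customerID) ON DELETE CASCADE
        );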

    Read the article

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times

    - by Andy
    I have this DB design:

        CREATE TABLE report (
          ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
          user MEDIUMINT NOT NULL,
          created TIMESTAMP NOT NULL,
          state INT NOT NULL,
          FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
          ID MEDIUMINT NOT NULL,
          k VARCHAR(128) NOT NULL,
          v TEXT NOT NULL,
          PRIMARY KEY (ID, k),
          FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate markup:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns( @JoinColumn(name="ID", referencedColumnName="ID") )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties = new HashMap<String, ReportPropertyEntity>();
        }

    and

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {
            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {
                /** long#serialVersionUID */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    I have inserted one Report and 4 Properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get back the report 4 times, instead of just once with a Map holding the 4 properties. I'm not great at Hibernate, to be honest - I prefer straight SQL, but I must learn - and I can't see what is wrong. Any suggestions?

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    3. Recreate the PK with a NON-CLUSTERED index
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme, then copying the data over, dropping the old table and renaming the new one, be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
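    For reference, a sketch of the DDL shape the vendor's recommendation usually takes on SQL Server 2000 (object names are made up, and it assumes the uniqueness is enforced as a constraint rather than a plain index; this is not an endorsement of the change):

        ALTER TABLE BigTable DROP CONSTRAINT UQ_BigTable_ColA_ColB;
        ALTER TABLE BigTable DROP CONSTRAINT PK_BigTable;

        ALTER TABLE BigTable ADD CONSTRAINT PK_BigTable
            PRIMARY KEY NONCLUSTERED (ID);
        ALTER TABLE BigTable ADD CONSTRAINT UQ_BigTable_ColA_ColB
            UNIQUE CLUSTERED (ColA, ColB);

    Dropping the non-clustered constraint before the clustered one avoids rebuilding it twice, since non-clustered indexes are rebuilt whenever the clustered index is dropped or created.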

    Read the article

  • Forcing the use of a custom QuerySet

    - by Jack M.
    I'm trying to secure an application so that users can only see objects which are assigned to them. I've got a custom QuerySet which works for this, but I'm trying to find a way to force the use of this additional functionality. Here is my model:

        class Inquiry(models.Model):
            ts = models.DateTimeField(auto_now_add=True)
            assigned_to_user = models.ForeignKey(User, blank=True, null=True, related_name="assigned_inquiries")
            objects = CustomQuerySetManager()

            class QuerySet(QuerySet):
                def for_user(self, user):
                    return self.filter(assigned_to_user=user)

    (The CustomQuerySetManager is documented over here, if it is important.) I'm trying to force everything to use this filtering, so that other methods will raise an exception. For example:

        Inquiry.objects.all()                                  ## Should raise an exception.
        Inquiry.objects.filter(pk=69)                          ## Should raise an exception.
        Inquiry.objects.for_user(request.user).filter(pk=69)   ## Should work.
        inqs = Inquiry.objects.for_user(request.user)          ## Should work.
        inqs.filter(pk=69)                                     ## Should work.

    It seems to me that there should be a way to force the security of these objects by allowing only certain users to access them. I am not concerned with how this might impact the admin interface.

    Read the article

  • Getting the ranking of a photo in SQL

    - by Jake Petroules
    I have the following tables:

    Photos [ PhotoID, CategoryID, ... ]  PK [ PhotoID ]
    Categories [ CategoryID, ... ]  PK [ CategoryID ]
    Votes [ PhotoID, UserID, ... ]  PK [ PhotoID, UserID ]

    A photo belongs to one category. A category may contain many photos. A user may vote once on any photo. A photo can be voted for by many users. I want to select the ranks of a photo (by counting how many votes it has), both overall and within the scope of the category that photo belongs to. The count of SELECT * FROM Votes WHERE PhotoID = @PhotoID is the number of votes a photo has. I want the resulting table to have generated columns for overall rank and rank within category, so that I may order the results by either. So for example, the resulting table from the query should look like:

        PhotoID  VoteCount  RankOverall  RankInCategory
        1        48         1            7
        3        45         2            5
        19       33         3            1
        2        17         4            3
        7        9          5            5

    ...you get the idea. How can I achieve this? So far I've got the following query to retrieve the vote counts, but I need to generate the ranks as well:

        SELECT PhotoID, UserID, CategoryID, DateUploaded,
            (SELECT COUNT(CommentID) AS Expr1 FROM dbo.Comments WHERE (PhotoID = dbo.Photos.PhotoID)) AS CommentCount,
            (SELECT COUNT(PhotoID) AS Expr1 FROM dbo.PhotoVotes WHERE (PhotoID = dbo.Photos.PhotoID)) AS VoteCount,
            Comments
        FROM dbo.Photos
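    A sketch of one way to produce both ranks in a single statement (it assumes SQL Server 2005 or later for the ranking functions, and uses the Votes table from the description - the existing query refers to dbo.PhotoVotes, so the name may need adjusting):

        SELECT p.PhotoID,
               COUNT(v.UserID) AS VoteCount,
               RANK() OVER (ORDER BY COUNT(v.UserID) DESC) AS RankOverall,
               RANK() OVER (PARTITION BY p.CategoryID ORDER BY COUNT(v.UserID) DESC) AS RankInCategory
        FROM dbo.Photos p
        LEFT JOIN dbo.Votes v ON v.PhotoID = p.PhotoID
        GROUP BY p.PhotoID, p.CategoryID
        ORDER BY RankOverall;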

    Read the article

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables, however these are not enforced with foreign key constraints. Consider the following simplified example of the typical types of tables I am dealing with. There are clear relations between the User and City and Province tables. However, the key issue is the inconsistent data types in the tables and the naming conventions:

    User: UserRowId [int] PK, Name [varchar(50)], CityId [smallint], ProvinceRowId [bigint]
    City: CityRowId [bigint] PK, CityDescription [varchar(100)]
    Province: ProvinceId [int] PK, ProvinceDesc [varchar(50)]

    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase and this is one of the stumbling blocks I have come across.

    What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials all work with relatively cleanly designed existing databases, or newly created ones, when compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. iBATIS.NET) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO which indicates to me that an ORM can be used - however I get the impression that this is a question of mapping.

    Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience with ORMs and MVC - so this is an awesome learning curve I am on.

    Read the article
