Search Results

Search found 5527 results on 222 pages for 'unique constraint'.

Page 55/222 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • Django: Summing values of records grouped by foreign key

    - by Dan0
    Hi there. In Django, given the following models (slightly simplified), I'm struggling to work out how I would get a view including sums of groups:

        class Client(models.Model):
            api_key = models.CharField(unique=True, max_length=250, primary_key=True)
            name = models.CharField(unique=True, max_length=250)

        class Purchase(models.Model):
            purchase_date = models.DateTimeField()
            client = models.ForeignKey(SavedClient, to_field='api_key')
            amount_to_invoice = models.FloatField(null=True)

    For a given month, e.g. April 2010, I'd like to see:

        For each Client that purchased this month:
            * Client: Name
            * Total amount of purchases for this month
            * Total cost of purchases for this month
            For each Purchase made by the client:
                * Date
                * Amount
                * Etc.

    I've been looking into Django annotation, but can't get my head around how to sum values of a field for a particular group over a particular month and send the information to a template as a variable/tag. Any info would be appreciated.
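
    A minimal sketch of the aggregation being asked about, assuming the models above (the filter values and annotation names are illustrative):

        from django.db.models import Count, Sum

        april = Purchase.objects.filter(purchase_date__year=2010,
                                        purchase_date__month=4)
        per_client = april.values('client').annotate(
            purchase_count=Count('pk'),
            total_invoiced=Sum('amount_to_invoice'))
        # per_client yields one dict per client, e.g.
        # {'client': 'api-key-1', 'purchase_count': 3, 'total_invoiced': 1500.0},
        # which can be passed to the template in the context.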

    Read the article

  • Atomic INSERT/SELECT in HSQLDB

    - by PartlyCloudy
    Hello, I have the following hsqldb table, in which I map UUIDs to auto-incremented IDs:

        SHORT_ID (BIGINT, PK, auto incremented) | UUID (VARCHAR, unique)

    Create command:

        CREATE TABLE table (SHORT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
                            UUID VARCHAR(36) UNIQUE)

    In order to add new pairs concurrently, I want to use the atomic MERGE INTO statement. So my (prepared) statement looks like this:

        MERGE INTO table USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT VALUES v.x

    When I execute the statement (setting the placeholder correctly), I always get:

        Caused by: org.hsqldb.HsqlException: row column count mismatch

    Could you please give me a hint about what is going wrong here? Thanks in advance.
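
    One plausible fix, sketched against this schema (untested): the INSERT arm supplies one value for a two-column table, so name the target column explicitly and let the identity column generate itself:

        MERGE INTO table USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON table.UUID = v.x
        WHEN NOT MATCHED THEN INSERT (UUID) VALUES (v.x)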

    Read the article

  • Firebird Insert Distinct Data Using ZeosLib and Delphi

    - by Brad
    I'm using Zeos 7 and Delphi 2K9, and want to check whether a value is already in the database under a specific field before I post the data to the database.

    Example: field KEYWORD holds the values Cheese, Mouse, Trap, and tblkeywordKEYWORD.Value = Cheese.

    What is wrong with the following? And is there a better way?

        zQueryKeyword.SQL.Add('IF NOT EXISTS(Select KEYWORD from KEYWORDLIST ='''+
          tblkeywordKEYWORD.Value+''')INSERT into KEYWORDLIST(KEYWORD) VALUES ('''+
          tblkeywordKEYWORD.Value+'''))');
        zQueryKeyword.ExecSql;

    I tried using the unique constraint in IB Expert, but it gives the following error:

        Invalid insert or update value(s): object columns are constrained -
        no 2 table rows can have duplicate column values.
        attempt to store duplicate value (visible to active transactions)
        in unique index "UNQ1_KEYWORDLIST".

    Thanks for any help -Brad
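
    A sketch of one Firebird-friendly alternative (untested; the VARCHAR(100) length is a guess at the column size): plain DSQL has no IF statement, but an INSERT ... SELECT with NOT EXISTS performs the same check, and parameters avoid the quoting problems above:

        zQueryKeyword.SQL.Text :=
          'INSERT INTO KEYWORDLIST (KEYWORD) ' +
          'SELECT CAST(:kw1 AS VARCHAR(100)) FROM RDB$DATABASE ' +
          'WHERE NOT EXISTS (SELECT 1 FROM KEYWORDLIST WHERE KEYWORD = :kw2)';
        zQueryKeyword.ParamByName('kw1').AsString := tblkeywordKEYWORD.Value;
        zQueryKeyword.ParamByName('kw2').AsString := tblkeywordKEYWORD.Value;
        zQueryKeyword.ExecSQL;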

    Read the article

  • C# Conditional Operator Not a Statement?

    - by abelenky
    I have a simple little code fragment that is frustrating me:

        HashSet<long> groupUIDs = new HashSet<long>();
        groupUIDs.Add(uid) ? unique++ : dupes++;

    At compile time, it generates the error:

        Only assignment, call, increment, decrement, and new object expressions can be used as a statement

    HashSet.Add is documented to return a bool, so the ternary (?:) operator should work, and this looks like a completely legitimate way to track the number of unique and duplicate items I add to a hash-set. When I reformat it as an if-then-else, it works fine. Can anyone explain the error, and whether there is a way to do this as a simple ternary operator?
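
    A sketch of what the compiler will accept (variable names invented): a conditional expression isn't a statement on its own, but it becomes legal the moment its value is consumed, e.g. by an assignment:

        using System.Collections.Generic;

        class CountUniques
        {
            static void Main()
            {
                var groupUIDs = new HashSet<long>();
                long[] uids = { 1, 2, 2, 3 };
                int unique = 0, dupes = 0;
                foreach (long uid in uids)
                {
                    // Consuming the value makes the conditional a valid statement:
                    int touched = groupUIDs.Add(uid) ? ++unique : ++dupes;
                }
            }
        }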

    Read the article

  • Understanding glm$residuals and resid(glm)

    - by Michael Bishop
    Hi, can you tell me what is returned by glm$residuals and resid(glm), where glm is a quasipoisson object? E.g., how would I create them using glm$y and glm$linear.predictors?

        glm$residuals
        n      missing  unique  Mean     .05      .10      .25      .50      .75     .90     .95
        37715  10042    2174    -0.2574  -2.7538  -2.2661  -1.4480  -0.4381  0.7542  1.9845  2.7749
        lowest : -4.243 -3.552 -3.509 -3.481 -3.464
        highest:  8.195  8.319  8.592  9.089  9.416

        resid(glm)
        n      missing  unique  Mean        .05      .10      .25      .50      .75     .90     .95
        37715  0        2048    -2.727e-10  -1.0000  -1.0000  -0.6276  -0.2080  0.4106  1.1766  1.7333
        lowest : -1.0000 -0.8415 -0.8350 -0.8333 -0.8288
        highest:  7.2491  7.6110  7.6486  7.9574 10.1932
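
    A sketch of the usual relationship (assuming the default log link and that resid() is called with its default type): glm$residuals holds working residuals, while resid(glm) returns deviance residuals.

        # Illustrative data; the fit itself is what matters
        set.seed(1)
        x <- runif(100)
        y <- rpois(100, exp(1 + x))
        fit <- glm(y ~ x, family = quasipoisson())

        mu <- fit$fitted.values           # = exp(fit$linear.predictors) for the log link
        working <- (fit$y - mu) / mu      # working residuals: matches fit$residuals
        deviance <- resid(fit)            # resid() defaults to type = "deviance"
        all.equal(working, fit$residuals) # TRUE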

    Read the article

  • how to do this in shell

    - by user150674
    I have a very large tab-delimited file named 'ColCheckMe' that I need to process. Each line in 'ColCheckMe' is supposed to have 7 columns, and the values in the 5th column are integers. Using shell functions, how would I verify that these conditions are satisfied, and that each value in column 1 is unique? Also, how would I write a shell function that counts the number of occurrences of the word "SpecStr" in the file?

    I have the first part, which checks for the valid number of fields and checks that the 5th field is an integer:

        nawk '
        NF != 7 { printf("[%d] has invalid [%d] number of fields\n", FNR, NF) }
        $5 !~ /^[0-9]+$/ { printf("[%d] 5th field is invalid [%s]\n", FNR, $5) }' ColCheckMe

    Now I want to verify that the values in column 1 of the same file are unique, and to count the occurrences of the word "SpecStr". Thanks a lot
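
    A sketch of both pieces in the same nawk/shell style (untested; assumes tab delimiters as stated, and a grep that supports -o):

        # Report any value that appears in column 1 more than once
        nawk -F'\t' 'seen[$1]++ { printf("[%d] duplicate value in field 1 [%s]\n", FNR, $1) }' ColCheckMe

        # Count every occurrence of "SpecStr", even several per line
        count_specstr() {
            grep -o 'SpecStr' "$1" | wc -l
        }
        count_specstr ColCheckMe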

    Read the article

  • Recommendations on SharePoint site permission model

    - by Sachin
    Hi All, I have a SharePoint site which contains a root site and site collections in it. Some sites inherit permissions from their parent site, and some sites have their own permission model. Now a user from the owner group of the root site browses the site collection, but there are a few sites that don't allow the user to view their content. What I want is a general recommendation on the best approach to setting site permissions when creating a new site in SharePoint:

    In what cases should we inherit permissions from the parent site?
    In what cases should we use unique permissions for a site?
    If a site has a unique permission set, is it possible to create a group at the root level which has access to all site collections irrespective of each site's permission model?

    I want a general recommendation based on the above scenario. Any help will be appreciated. Thanks Sachin

    Read the article

  • insert into sql table column as GUID

    - by loviji
    Using ADO.NET, I am trying to create a table column with a unique name. For the unique name I use a new GUID:

        Guid sysColumnName = new Guid();
        sysColumnName = Guid.NewGuid();
        string stAddColumn = "ALTER TABLE " + tableName + " ADD " +
            sysColumnName.ToString() + " " + convertedColumnType + " NULL";
        SqlCommand cmdAddColumn = new SqlCommand(stAddColumn, con);
        cmdAddColumn.ExecuteNonQuery();
        con.Close();

    And it fails:

        System.Data.SqlClient.SqlException: Incorrect syntax near '-'.
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)

    Now the question: how can I fix it, or what different way can I use to create a unique column?
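
    A sketch of one fix (names reused from the question): the dashes in a GUID are not valid in an unquoted identifier, so strip them with the "N" format or bracket-quote the name; a letter prefix also keeps the identifier from starting with a digit:

        // "N" renders the GUID as 32 hex digits with no dashes
        string columnName = "col_" + Guid.NewGuid().ToString("N");
        string stAddColumn = "ALTER TABLE [" + tableName + "] ADD [" +
            columnName + "] " + convertedColumnType + " NULL";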

    Read the article

  • text indexes vs integer indexes in mysql

    - by imanc
    Hey, I have always tried to have an integer primary key on a table no matter what. But now I am questioning whether this is always necessary. Let's say I have a product table and each product has a globally unique SKU number - a string of, say, 8-16 characters. Why not make this the PK? Typically I would make this field a unique index but then have an auto-incrementing int field as the PK, as I assumed it would be faster and easier to maintain, and would allow me to do things like get the last 5 records added with ease. But in terms of optimisation, assuming I'd only ever be matching against the full text field and not doing partial text-matching queries (e.g. LIKE '%...%'), can you guys think of any reasons not to use a text-based primary key, most likely of type varchar()? Cheers, imanc
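
    The two designs side by side, as a sketch (table and column names invented):

        -- Natural key: the SKU itself is the primary key
        CREATE TABLE product (
            sku  VARCHAR(16) NOT NULL PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        -- Surrogate key: auto-increment PK plus a unique index on the SKU
        CREATE TABLE product_surrogate (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            sku  VARCHAR(16) NOT NULL,
            name VARCHAR(255) NOT NULL,
            UNIQUE KEY uk_sku (sku)
        );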

    Read the article

  • mysql first record retrieval

    - by Sammy
    While very easy to do in Perl or PHP, I cannot figure out how to use MySQL alone to extract the first unique occurrence of a record. For example, given the following table:

        Name  Date        Time      Sale
        John  2010-09-12  10:22:22   500
        Bill  2010-08-12  09:22:37  2000
        John  2010-09-13  10:22:22   500
        Sue   2010-09-01  09:07:21  1000
        Bill  2010-07-25  11:23:23  2000
        Sue   2010-06-24  13:23:45  1000

    I would like to extract the first record for each individual in ascending time order. After sorting the table in ascending time order, I need to extract the first unique record by name. So the output would be:

        Name  Date        Time      Sale
        John  2010-09-12  10:22:22   500
        Bill  2010-07-25  11:23:23  2000
        Sue   2010-06-24  13:23:45  1000

    Is this doable in an easy fashion with MySQL? Thanks, Sammy
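
    One common pattern, sketched here assuming the table is named sales: group to find each person's earliest timestamp, then join back to pick up the full row.

        SELECT s.Name, s.Date, s.Time, s.Sale
        FROM sales s
        JOIN (
            SELECT Name, MIN(CONCAT(Date, ' ', Time)) AS first_dt
            FROM sales
            GROUP BY Name
        ) f ON f.Name = s.Name
           AND CONCAT(s.Date, ' ', s.Time) = f.first_dt;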

    Read the article

  • Compound IDENTITY column in SQL SERVER 2008

    - by Asaf R
    An Orders table has a CustomerId column and an OrderId column. For certain reasons it's important that OrderId is no longer than 2 bytes. There will be several million orders in total, which makes 2 bytes not enough. A customer will have no more than several thousand orders, making 2 bytes enough. The obvious solution is to have (CustomerId, OrderId) be unique rather than (OrderId) itself. The problem is generating the next customer's OrderId - preferably without creating a separate table for each customer (even if it contains only an IDENTITY column), in order to keep the upper layers of the solution simple. Q: How would you generate the next OrderId so that (CustomerId, OrderId) is unique but OrderId itself is allowed to have repetitions? Does SQL Server 2008 have built-in functionality for doing this? For lack of a better term, I'm calling it a compound IDENTITY column.
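
    SQL Server 2008 has no compound identity (and SEQUENCE objects only arrived in SQL Server 2012), but a per-customer MAX+1 under key-range locks is a common sketch of a workaround:

        BEGIN TRANSACTION;
        -- UPDLOCK/HOLDLOCK serialize concurrent inserts for the same customer
        INSERT INTO Orders (CustomerId, OrderId)
        SELECT @CustomerId, COALESCE(MAX(OrderId), 0) + 1
        FROM Orders WITH (UPDLOCK, HOLDLOCK)
        WHERE CustomerId = @CustomerId;
        COMMIT TRANSACTION;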

    Read the article

  • SQL COUNT of COUNT

    - by cryptic-star
    I have some data I am querying. The table is composed of two columns - a unique ID, and a value. I would like to count the number of times each unique value appears (which can easily be done with a COUNT and GROUP BY), but I then want to be able to count that. So, I would like to see how many items appear twice, three times, etc. So for the following data (ID, val)...

        1, 2
        2, 2
        3, 1
        4, 2
        5, 1
        6, 7
        7, 1

    The intermediate step would be (val, count)...

        1, 3
        2, 3
        7, 1

    And I would like to have (count_from_above, new_count)...

        3, 2  -- since three appears twice in the previous table
        1, 1  -- since one appears once in the previous table

    Is there any query which can do that? If it helps, I'm working with Postgres. Thanks!
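
    A sketch of the nested aggregation (table and column names assumed from the description):

        SELECT cnt AS count_from_above, COUNT(*) AS new_count
        FROM (
            SELECT val, COUNT(*) AS cnt
            FROM items
            GROUP BY val
        ) per_value
        GROUP BY cnt;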

    Read the article

  • SEO Canonical Issue resolution on iis

    - by kacalapy
    I have a site running on IIS that I have a canonical issue with. The error is:

        The page with URL "http://www.site.org/images/join_forum.gif" can also be
        accessed by using URL "https://www.site.org/images/join_forum.gif".
        Search engines identify unique pages by using URLs. When a single page can
        be accessed by using any one of multiple URLs, a search engine assumes
        that there are multiple unique pages. Use a single URL to reference a page
        to prevent dilution of page relevance. You can prevent dilution by
        following a standard URL format.

    How can I resolve this?
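
    One way to enforce a single canonical scheme is a permanent redirect in web.config, sketched here with the IIS URL Rewrite module (whether HTTP or HTTPS should win is a site decision; this sketch redirects HTTPS to HTTP to match the error above):

        <rewrite>
          <rules>
            <rule name="CanonicalScheme" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="on" />
              </conditions>
              <action type="Redirect" url="http://www.site.org/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>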

    Read the article

  • Fast permutation -> number -> permutation mapping algorithms

    - by ijw
    I have n elements. For the sake of an example, let's say, 7 elements: 1234567. I know there are 7! = 5040 permutations possible of these 7 elements. I want a fast algorithm comprising two functions: f(number) maps a number between 0 and 5039 to a unique permutation, and f'(permutation) maps the permutation back to the number it was generated from. I don't care about the correspondence between number and permutation, provided each permutation has its own unique number. So, for instance, I might have functions where

        f(0) = '1234567'
        f'('1234567') = 0

    The fastest algorithm that comes to mind is to enumerate all permutations and create a lookup table in both directions, so that, once the tables are created, f(0) would be O(1) and f('1234567') would be a lookup on a string. However, this is memory hungry, particularly when n becomes large. Can anyone propose another algorithm that would work quickly and without the memory disadvantage?
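
    The classic table-free approach is the factorial number system (Lehmer code); both directions are O(n^2) time and O(n) space. A Python sketch:

        from math import factorial

        def f(number, elements):
            """Map a number in [0, n!) to a unique permutation of elements."""
            pool = list(elements)
            perm = []
            while pool:
                # Each factorial "digit" picks one of the remaining elements
                index, number = divmod(number, factorial(len(pool) - 1))
                perm.append(pool.pop(index))
            return ''.join(perm)

        def f_inv(perm, elements):
            """Map a permutation back to the number it came from."""
            pool = list(elements)
            number = 0
            for item in perm:
                number += pool.index(item) * factorial(len(pool) - 1)
                pool.remove(item)
            return number

        assert f(0, '1234567') == '1234567'
        assert f_inv('1234567', '1234567') == 0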

    Read the article

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem them with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert, I am not, so I need someone to explain this little phenomenon: when I create 1,000 Cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds, right?

    Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards - more as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. So if you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at creating. Meanwhile it flies through the uniqueness tests. So my question to the stackoverflow community is: why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code, and it says 5.5 hours. That may be completely wrong, I'm not sure. But if it stays on the line I curve-fitted, it would be right around there.)
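
    One direction worth sketching (illustrative only; the table and column names are assumed from the task above): replace the 100,000 individual create! calls with multi-row INSERTs, so each database round trip carries a whole batch instead of one card.

        codes.each_slice(1000) do |batch|
          values = batch.map do |code|
            # quote() escapes the code safely for interpolation
            "(#{@bundle.id}, #{Card.connection.quote(code)})"
          end
          Card.connection.execute(
            "INSERT INTO cards (bundle_id, code) VALUES #{values.join(', ')}"
          )
        end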

    Read the article

  • .NET SortedDictionary But Sorted By Values

    - by Michael Covelli
    I need a data structure that acts like a SortedDictionary<int, double> but is sorted based on the values rather than the keys. I need it to take about 1-2 microseconds to add and remove items when we have about 3000 items in the dictionary. My first thought was simply to switch the keys and values in my code. This very nearly works. I can add and remove elements in about 1.2 microseconds in my testing by doing this. But the keys have to be unique in a SortedDictionary so that means that values in my inverse dictionary would have to be unique. And there are some cases where they may not be. Any ideas of something in the .NET libraries already that would work for me?
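
    A sketch of one workaround (names invented): store (key, value) pairs in a SortedSet ordered primarily by value, with the key as a tie-breaker so duplicate values can coexist; Add and Remove stay O(log n).

        using System.Collections.Generic;

        class ByValueThenKey : IComparer<KeyValuePair<int, double>>
        {
            public int Compare(KeyValuePair<int, double> a, KeyValuePair<int, double> b)
            {
                int byValue = a.Value.CompareTo(b.Value);
                return byValue != 0 ? byValue : a.Key.CompareTo(b.Key);
            }
        }

        class Demo
        {
            static void Main()
            {
                var byValue = new SortedSet<KeyValuePair<int, double>>(new ByValueThenKey());
                byValue.Add(new KeyValuePair<int, double>(1, 3.5));
                byValue.Add(new KeyValuePair<int, double>(2, 3.5)); // duplicate value is fine
                byValue.Remove(new KeyValuePair<int, double>(1, 3.5));
            }
        }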

    Read the article

  • How to set a control ID in ListView Template dynamically?

    - by citronas
    I have a ListView that is rendered with multiple items. Now I want to toggle some HTML attributes with jQuery. Therefore it would be best to have access to these elements via a unique ID. But trying to create a "dynamic" and therefore unique ID with

        <tr runat="server" ID='<%# this.GetUniqueID() %>'>
        </tr>

    results in an error telling me that the ID needs to be simple and cannot be set by a call to a method. I know that I can dynamically create controls in the code-behind and set the ID there. But in this case, I'd rather let the content be rendered by the ListView itself. That brings me to the conclusion that the idea of setting a dynamic ID in the template is totally wrong. How can I achieve the desired behaviour?
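
    One common alternative, sketched here (the class name, data attribute, and Eval fields are invented for illustration): drop the server-side ID entirely and give jQuery a CSS class plus a data attribute to select on.

        <ItemTemplate>
          <tr class="toggle-row" data-key='<%# Eval("Id") %>'>
            <td><%# Eval("Name") %></td>
          </tr>
        </ItemTemplate>

        // jQuery side: toggle by class, read the key when needed
        $('.toggle-row').click(function () {
            var key = $(this).attr('data-key');
            $(this).toggleClass('selected');
        });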

    Read the article

  • Strategies to use Database Sequences?

    - by Bruno Brant
    Hello all, I have a high-end architecture which receives many requests every second (in fact, it can receive many requests every millisecond). The architecture is designed so that some controls rely on a certain unique id assigned to each request. To create such a UID we use a DB2 sequence. Right now I already understand that this approach is flawed, since using the database is costly, but it made sense because this value is also used to log information on the database. My team has just found an increase of almost 1000% in elapsed time for each transaction, which we are assuming happened because of the sequence. Now I wonder: will using sequences serialize access to my application? Since they have to guarantee that increments work the way they should, they have to, right? So, are there better strategies when using sequences? Please assume that I have no other way of obtaining a unique id than relying on the database.
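
    One knob to check, sketched as DDL (the sequence name is invented): a cached, unordered sequence hands each connection a block of values from memory, so callers do not serialize on every fetch - at the cost of gaps and out-of-order values.

        CREATE SEQUENCE request_uid
            AS BIGINT
            START WITH 1
            INCREMENT BY 1
            CACHE 1000
            NO ORDER;

        -- fetching a value:
        -- VALUES NEXT VALUE FOR request_uid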

    Read the article

  • iPhone: Grouped Table Confusion

    - by senfo
    I'm trying to follow along with this UITableView tutorial and I'm afraid I'm getting confused. What I hope to accomplish is to create a grouped table resembling the standard grouped look and feel shown in the tutorial's screenshot. For each "break" in the table, I assume that I need another section. I understand there is a global NSMutableArray, which acts as the data source to the UITableView. Inside this global array, there are dictionaries for sections and for data, but this is where I start getting confused. Should I create a unique table to store each section in addition to a new table for the rows that are part of that section? In other words, if I have five sections, will I end up with ten unique collections of data (one for each section and one to store the rows related to the section), which all eventually get stored in the global array?
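
    A sketch of the usual shape (the dictionary keys are invented): one array of section dictionaries, each holding its own title and rows, so five sections means five dictionaries inside the single global array - not ten separate collections.

        // self.sections is the single NSMutableArray; each element looks like:
        //   { @"title" : NSString, @"rows" : NSArray }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [self.sections count];
        }

        - (NSInteger)tableView:(UITableView *)tableView
         numberOfRowsInSection:(NSInteger)section {
            return [[[self.sections objectAtIndex:section]
                        objectForKey:@"rows"] count];
        }

        - (NSString *)tableView:(UITableView *)tableView
        titleForHeaderInSection:(NSInteger)section {
            return [[self.sections objectAtIndex:section] objectForKey:@"title"];
        }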

    Read the article

  • Doctrine Mssql uniqueidentifier isn't cast as char or nvarchar when retrieved from the database.

    - by Tres
    When I retrieve a record from the database which has a column of type "uniqueidentifier", Doctrine fills it with null rather than the unique id from the database. Some research and testing has brought this down to a PDO/dblib driver issue: when querying directly via PDO, null is returned in place of the unique id. For reference, http://trac.doctrine-project.org/ticket/1096 has a bit on this; however, it was updated 11 months ago with no comment for resolution. A way around this, as mentioned at http://bugs.php.net/bug.php?id=24752&edit=1, is to cast the column as a char. However, it doesn't seem that Doctrine exposes the native field type outside of generating models, which makes it a bit hard to detect uniqueidentifier types and cast them internally when building the SQL query. Has anyone found a workaround for this?
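
    The cast workaround at the SQL level looks like this (a sketch; the table and column names are invented):

        -- Casting makes dblib return the value as a string instead of null
        SELECT CAST(u.id AS CHAR(36)) AS id, u.name
        FROM users u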

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, (site_id, unixtime), (site_id, ip_address), and (site_id, uid). There are many different ways we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether this is a new visitor or a returning visitor.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered having site_id be the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't find one for this site ID. Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd - WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        WHERE trans_nbr IN (
              SELECT trans_nbr
              FROM my_table
              GROUP BY trans_nbr
              HAVING COUNT(trans_nbr) > 1
              LIMIT our_limit_to_have_single_process_grab
        )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and fix my other question/problem). But that won't work: I can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-)former processing_by values. I've tried doing it like:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        FROM my_table old_my_table
        JOIN (
              SELECT trans_nbr
              FROM my_table
              GROUP BY trans_nbr
              HAVING COUNT(trans_nbr) > 1
              LIMIT our_limit_to_have_single_process_grab
        ) sub_my_table
          ON old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE my_table.trans_nbr = sub_my_table.trans_nbr
          AND my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not viewable outside of the join; the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear...

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic code of the above that I wrote "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... What I now have (and it works) is like this:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        FROM my_table AS old_my_table
        WHERE trans_nbr IN (
              SELECT trans_nbr
              FROM my_table
              GROUP BY trans_nbr
              HAVING COUNT(*) > 1
              LIMIT our_limit_to_have_single_process_grab
        )
        AND my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by,
                  old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])

    Read the article

  • GORM ID generation and belongsTo association ?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId
            static hasMany = [codes:CodeSummary]
            static constraints = {
                id(unique:true, blank:false)
            }
            static mapping = {
                version false
                id column:'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession
            static belongsTo = [codeSetDetail:CodeSetDetail]
            static constraints = {
                id(unique:true, blank:false)
            }
            static mapping = {
                version false
                id column:'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

        code_set_detail: code_set_detail_id, code_summary_id

    and:

        code_summary: code_summary_id, code_set_detail_id (should not exist), code_class, name, accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and defined as the primary key in the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in the code_summary table, and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in one table and also map this key to another table?
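
    A sketch of one way to get exactly those columns (untested; it trades the hasMany for a direct reference, which only fits if each detail row points at a single summary):

        class CodeSetDetail {
            String id
            CodeSummary codeSummary
            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
                // store the association in code_set_detail.code_summary_id
                codeSummary column: 'code_summary_id'
            }
        }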

    Read the article

  • Traceability with XSD

    - by blastthisinferno
    I am trying to let my XML schema handle a little traceability functionality as I'm gathering requirements while I read through some functional specifications. (Not ideal for requirement management, but at least it's a start.) What I'm doing is creating a <functionalSpec> tag for each functional specification I am currently reading through, and a <requirement> tag for each requirement I find. Since I want to be able to trace where the requirement came from, I create a <trace> element with the id of the <functionalSpec> element. Instead of allowing myself to enter any plain-old-text in the <functionalSpecId> tag, I want the XSD to validate and make sure that I only enter an id that exists for an existing functional spec. My problem is that the XML Schema W3C Recommendation documentation seems to say that what I want to do is not possible (about halfway down):

        {selector} specifies a restricted XPath ([XPath]) expression relative to
        instances of the element being declared. This must identify a node set of
        subordinate elements (i.e. contained within the declared element) to which
        the constraint applies.

    I'm using Oxygen to create this since I'm fairly new to XSD files, and it gives me the following error:

        E [Xerces] Identity Constraint error: identity constraint "KeyRef@1045a2"
        has a keyref which refers to a key or unique that is out of scope.

    So my question is: does anyone know of a way that will allow me to use the same XML structure that I have below through using XSD? Below is the XML file.

        <?xml version="1.0" encoding="UTF-8" ?>
        <srs xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="srs req2.xsd" xmlns="srs">
          <requirements>
            <requirement DateCreated="2010-06-11" id="1">
              <Text>The system shall...</Text>
              <trace>
                <functionalSpecId>B010134</functionalSpecId>
              </trace>
              <revisions>
                <revision date="2010-06-11" num="0">
                  <description>Initial creation.</description>
                </revision>
              </revisions>
            </requirement>
          </requirements>
          <functionalSpecs>
            <functionalSpec id="B010134" model="Model-T">
              <trace>
                <meeting></meeting>
              </trace>
              <revisions>
                <revision date="2009-07-08" num="0">
                  <description>Initial creation.</description>
                </revision>
                <detailer>Me</detailer>
                <engineer>Me</engineer>
              </revisions>
            </functionalSpec>
          </functionalSpecs>
        </srs>

    Below is the XSD file.

        <?xml version="1.0" encoding="UTF-8" ?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="srs"
                   xmlns="srs" xmlns:srs="srs" elementFormDefault="qualified">

          <!-- SRS -->
          <xs:element name="srs" type="SRSType">
          </xs:element>

          <xs:complexType name="SRSType">
            <xs:sequence>
              <xs:element ref="requirements" />
              <xs:element ref="functionalSpecs" />
            </xs:sequence>
          </xs:complexType>

          <!-- Requirements -->
          <xs:element name="requirements" type="RequirementsType">
            <xs:unique name="requirementId">
              <xs:selector xpath="srs/requirements/requirement" />
              <xs:field xpath="@id" />
            </xs:unique>
          </xs:element>

          <xs:complexType name="RequirementsType">
            <xs:choice maxOccurs="unbounded">
              <xs:element name="requirement" type="RequirementType" />
            </xs:choice>
          </xs:complexType>

          <xs:complexType name="RequirementType">
            <xs:complexContent>
              <xs:extension base="RequirementInfo">
                <xs:sequence>
                  <xs:element name="trace" type="TraceType" maxOccurs="unbounded" minOccurs="1" />
                  <xs:element name="revisions" type="RequirementRevisions" />
                </xs:sequence>
              </xs:extension>
            </xs:complexContent>
          </xs:complexType>

          <xs:complexType name="RequirementRevisions">
            <xs:sequence>
              <xs:element name="revision" type="RevisionInfo" minOccurs="1" maxOccurs="unbounded" />
            </xs:sequence>
          </xs:complexType>

          <xs:complexType name="RequirementInfo">
            <xs:sequence>
              <xs:element name="Text" type="Description" />
            </xs:sequence>
            <xs:attribute name="DateCreated" type="xs:date" use="required" />
            <xs:attribute name="id" type="xs:integer" use="required" />
          </xs:complexType>

          <!-- Functional Specs -->
          <xs:element name="functionalSpecs" type="FunctionalSpecsType">
            <xs:unique name="functionalSpecId">
              <xs:selector xpath="srs/functionalSpecs/functionalSpec" />
              <xs:field xpath="@id" />
            </xs:unique>
          </xs:element>

          <xs:complexType name="FunctionalSpecsType">
            <xs:choice maxOccurs="unbounded">
              <xs:element name="functionalSpec" type="FunctionalSpecType" />
            </xs:choice>
          </xs:complexType>

          <xs:complexType name="FunctionalSpecType">
            <xs:complexContent>
              <xs:extension base="FunctionalSpecInfo">
                <xs:sequence>
                  <xs:element name="trace" type="TraceType" maxOccurs="unbounded" minOccurs="1" />
                  <xs:element name="revisions" type="FunctionalSpecRevisions" />
                </xs:sequence>
              </xs:extension>
            </xs:complexContent>
          </xs:complexType>

          <xs:complexType name="FunctionalSpecRevisions">
            <xs:sequence>
              <xs:element name="revision" type="RevisionInfo" minOccurs="1" maxOccurs="unbounded" />
              <xs:element name="detailer" type="xs:string" />
              <xs:element name="engineer" type="xs:string" />
            </xs:sequence>
          </xs:complexType>

          <xs:complexType name="FunctionalSpecInfo">
            <xs:attribute name="id" type="xs:string" use="required" />
            <xs:attribute name="model" type="xs:string" use="required" />
          </xs:complexType>

          <!-- Requirements, Functional Specs -->
          <xs:complexType name="TraceType">
            <xs:choice>
              <xs:element name="requirementId">
                <xs:keyref refer="requirementId" name="requirementIdRef">
                  <xs:selector xpath="srs/requirements/requirement" />
                  <xs:field xpath="@id" />
                </xs:keyref>
              </xs:element>
              <xs:element name="functionalSpecId">
                <xs:keyref refer="functionalSpecId" name="functionalSpecIdRef">
                  <xs:selector xpath="srs/functionalSpecs/functionalSpec" />
                  <xs:field xpath="@id" />
                </xs:keyref>
              </xs:element>
              <xs:element name="meeting" />
            </xs:choice>
          </xs:complexType>

          <!-- Common -->
          <xs:complexType name="RevisionInfo">
            <xs:choice>
              <xs:element name="description" type="Description" />
            </xs:choice>
            <xs:attribute name="date" type="xs:date" use="required" />
            <xs:attribute name="num" type="xs:integer" use="required" />
          </xs:complexType>

          <xs:complexType name="Description" mixed="true">
            <xs:simpleContent>
              <xs:extension base="xs:string">
                <xs:attribute name="Date" type="xs:date" />
              </xs:extension>
            </xs:simpleContent>
          </xs:complexType>

        </xs:schema>
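
    One way out, sketched below (untested; the constraint names are invented): declare the key and the keyref together on a common ancestor - the srs element - so the keyref is in scope of the key. Note the namespace-prefixed XPaths, needed because the schema has a target namespace:

        <xs:element name="srs" type="SRSType">
          <xs:key name="functionalSpecKey">
            <xs:selector xpath="srs:functionalSpecs/srs:functionalSpec" />
            <xs:field xpath="@id" />
          </xs:key>
          <xs:keyref name="functionalSpecRef" refer="srs:functionalSpecKey">
            <xs:selector xpath="srs:requirements/srs:requirement/srs:trace/srs:functionalSpecId" />
            <xs:field xpath="." />
          </xs:keyref>
        </xs:element>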

    Read the article

  • MSBuild: Items + Batching + CreateItem + Transforms Question

    - by KeithCS
    I have this bit of an MSBuild project that is making me wonder why the outcome is the way it is. Not that it is causing an issue or anything of the sort, but I would like to try to better my understanding of it.

        <?xml version="1.0" encoding="utf-8" ?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
                 DefaultTargets="TestTarget1;TestTarget2" ToolsVersion="3.5">
          <ItemGroup>
            <PathDir Include="C:\RootDir\UniqueDir1" />
            <PathDir Include="C:\RootDir\UniqueDir2" />
          </ItemGroup>
          <Target Name="TestTarget1" Outputs="%(PathDir.Identity)">
            <PropertyGroup>
              <RootPath>%(PathDir.Identity)</RootPath>
            </PropertyGroup>
            <ItemGroup>
              <SubDirectory Include="Common1" />
              <SubDirectory Include="Common2" />
            </ItemGroup>
            <CreateItem Include="@(SubDirectory->'$(RootPath)\%(Identity)')">
              <Output TaskParameter="Include" ItemName="FullPath" />
            </CreateItem>
            <Message Text="@(FullPath)" />
          </Target>
          <Target Name="TestTarget2">
            <Message Text="@(FullPath)" />
          </Target>
        </Project>

    So I have two main paths that are unique, and within each I have two directories with the same names. In TestTarget1, I am batching against the identity of the items in PathDir, and then performing a transform on the SubDirectory item, which contains the common folder names found in the unique directories, to create a new item containing the full paths. After that, the output for the targets is as follows:

        TestTarget1:
        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2
        C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

        TestTarget2:
        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2;C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

    So my question is... why does TestTarget1 only display the directories containing the directory it is batching against? I know it probably has to do with batching, but that's all I know.

    Read the article
