
  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried plain inserts. No go. Then I tried inserts wrapped in BEGIN/COMMIT. Not nearly fast enough. Next I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        ) WITH (OIDS=FALSE);

        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
          ON INSERT TO climate.measurement
          WHERE date_part('month'::text, new.taken)::integer = 1
            AND new.category_id = 1
          DO INSTEAD
            INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
            VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data in any format. I'm looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using pg_bulkload: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
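
    One direction worth sketching (not from the post; it assumes each row's target partition is computable client-side from month and category, as the rule above suggests): skip the rules entirely and COPY straight into each child table via psycopg2. The connection string, input file name, and partition-naming scheme are assumptions here.

        import csv
        import io
        import psycopg2

        conn = psycopg2.connect("dbname=climate")  # hypothetical connection string
        cur = conn.cursor()

        # Bucket rows by (month, category_id), mirroring the rule's WHERE clause.
        batches = {}
        with open("measurements.csv") as f:  # hypothetical input file
            next(f)  # skip the header row
            for station_id, taken, amount, category_id, flag in csv.reader(f):
                taken = taken.strip("'")
                month = int(taken.split("-")[1])
                buf = batches.setdefault((month, int(category_id)), io.StringIO())
                buf.write("%s,%s,%s,%s,%s\n" % (station_id, taken, amount, category_id, flag))

        # One COPY per child table; id is bigserial, so it is left to the default.
        for (month, cat), buf in batches.items():
            buf.seek(0)
            cur.copy_expert(
                "COPY climate.measurement_%02d_%03d "
                "(station_id, taken, amount, category_id, flag) FROM STDIN WITH CSV" % (month, cat),
                buf)
        conn.commit()

    For brevity this buffers everything in memory; a real run over 237 million rows would stream to per-partition files first, but the COPY-per-child-table shape is the point.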

  • BeautifulSoup can't parse a webpage?

    - by JLTChiu
    I am using Beautiful Soup for parsing a webpage. I've heard it's very well known and good, but it doesn't seem to work properly. Here's what I did:

        import urllib2
        from bs4 import BeautifulSoup

        page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1")
        soup = BeautifulSoup(page)
        print soup.prettify()

    I think this is fairly straightforward: I open the webpage and pass it to BeautifulSoup. But here's what I got:

        Warning (from warnings module):
          File "C:\Python27\lib\site-packages\bs4\builder\_htmlparser.py", line 149
        "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help."))
        ...
        HTMLParseError: bad end tag: u'</"+"script>', at line 634, column 94

    I thought CNN's website would be well designed, so I am not very sure what's going on. Does anyone have an idea about this?
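
    For what it's worth, the warning itself points at the likely fix -- a sketch, assuming lxml (or html5lib) can be installed with pip install lxml: pass an explicit, more forgiving parser instead of the built-in HTMLParser.

        import urllib2
        from bs4 import BeautifulSoup

        page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/"
                               "skydiver-record-attempt/index.html?hpt=hp_t1")
        soup = BeautifulSoup(page, "lxml")  # explicit external parser
        print soup.prettify()

    Judging from the error text, the page embeds strings like "</" + "script>" inside inline JavaScript, which trips HTMLParser but not lxml's tolerant parser.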

  • How to display custom processing message in JQuery datatables

    - by Sukhi
    I am using the DataTables API to display data in my ASP.NET 4.0 application. I have one column, [ Delete ], to delete the row's data. When I click this link I send a jQuery AJAX request to delete the row from the database. I want to display a message such as [ Deleting record... ] to the end user until the data is deleted by the server-side processing. I put a div on my page with the message [ Deleting record... ] in it, and I show it when the delete link is clicked; but when the delete operation completes, DataTables also displays its built-in [ Processing... ] message, which looks odd, as two messages are showing. What can I do better to display the message to the end user? JS code:

        $('#tblVideoList .delete').live('click', function (e) {
            e.preventDefault();
            var oTable = $('#tblVideoList').dataTable();
            var aPos = oTable.fnGetPosition(this.parentNode);
            var aData = oTable.fnGetData(aPos[0]);
            if (confirm('Are you sure you want to delete the record?')) {
                $("#divDelete").show();
                var today = new Date();
                $.ajax({
                    type: "GET",
                    cache: false,
                    url: "samplepage.aspx",
                    success: function (msg) {
                        $("#divDelete").hide();
                        oTable.fnDraw();
                    }
                });
            }
            return false;
        });

    Thanks

  • Auditing in Entity Framework.

    - by Gabriel Susai
    After going through Entity Framework I have a couple of questions on implementing auditing in Entity Framework. I want to store each column value that is created or updated in a separate audit table. Right now I am calling SaveChanges(false) to save the records in the DB (the changes in the context are still not reset). Then I get the added/modified records and loop through the GetObjectStateEntries. But I don't know how to get the values of the columns that are filled by a stored proc, i.e. createdate, modifieddate etc. Below is the sample code I am working on:

        // Get the changed entries (i.e. records)
        IEnumerable<ObjectStateEntry> changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);

        // Iterate each ObjectStateEntry (for each record in the update/modified collection)
        foreach (ObjectStateEntry entry in changes)
        {
            // Iterate the columns in each record and get their old and new values respectively
            foreach (var columnName in entry.GetModifiedProperties())
            {
                string oldValue = entry.OriginalValues[columnName].ToString();
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, oldvalue, newvalue
            }
        }

        changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
        foreach (ObjectStateEntry entry in changes)
        {
            if (entry.IsRelationship) continue;
            var columnNames = (from p in entry.EntitySet.ElementType.Members
                               select p.Name).ToList();
            foreach (var columnName in columnNames)
            {
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, value
            }
        }

  • JqGrid updating grid on add

    - by CSharpAtl
    Scenario: I have three columns in my grid but only one is editable; the other two are filled in on the server side. I am using the built-in add functionality of jqGrid and NOT refreshing the grid on a successful add. I would like to have the row added to the grid, like it does automatically, but would like to add it myself, because the automatic add only fills in the one column that is marked 'editable'. I cannot seem to find a way to block the row from being automatically added to the grid, or a way to override the built-in add functionality in the grid. My idea was to add the row myself because I will have received back the full row of data on my submit. Questions:

    1. Is there a way to stop the grid from automatically adding the row when I do not want a grid refresh, so that I can manually add all the data for the row?
    2. Is it possible to use the built-in add button and override its onClick, without digging in and directly figuring out what jqGrid calls the button?
    3. Any better ideas on how to get the row added to the grid from the server side without doing it all manually, i.e. creating my own add button, popping up a dialog and handling all the submit functionality?

    EDIT: What would help is if I could stop the grid from auto-adding the row... I can deal with doing it myself.

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

        No  Start Time      End Time        CallType  Info
        1   13:14:37.236    13:14:53.700    Ping1     RTT(Avr):160ms
        2   13:14:58.955    13:15:29.984    Ping2     RTT(Avr):40ms
        3   13:19:12.754    13:19:14.757    Ping3_1   RTT(Avr):620ms
        3   13:19:12.754                    Ping3_2   RTT(Avr):210ms
        4   13:14:58.955    13:15:29.984    Ping4     RTT(Avr):360ms
        5   13:19:12.754    13:19:14.757    Ping1     RTT(Avr):40ms
        6   13:19:59.862    13:20:01.522    Ping2     RTT(Avr):163ms
        ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, then take the average of those two rows and export it as one row. So the end result would be like this:

        No  Start Time      End Time        CallType  Info
        1   13:14:37.236    13:14:53.700    Ping1     RTT(Avr):160ms
        2   13:14:58.955    13:15:29.984    Ping2     RTT(Avr):40ms
        3   13:19:12.754    13:19:14.757    Ping3     RTT(Avr):415ms
        4   13:14:58.955    13:15:29.984    Ping4     RTT(Avr):360ms
        5   13:19:12.754    13:19:14.757    Ping1     RTT(Avr):40ms
        6   13:19:59.862    13:20:01.522    Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I just wonder what the better way to do it is. Thanks!
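
    A tidier possibility, sketched below (the row layout and the Ping<n>_<m> naming come from the sample above; the function and variable names are made up for illustration): group rows on the (No, Start Time) key with itertools.groupby and average the RTTs within each group.

        import itertools
        import re

        def merge_pings(rows):
            """rows: lists like [no, start, end, calltype, info], sorted by (no, start)."""
            merged = []
            for key, grp in itertools.groupby(rows, key=lambda r: (r[0], r[1])):
                grp = list(grp)
                if len(grp) == 1:
                    merged.append(grp[0])
                    continue
                # Average the RTT values of the parallel pings (620 + 210 -> 415).
                rtts = [int(re.search(r"RTT\(Avr\):(\d+)ms", r[4]).group(1)) for r in grp]
                base = grp[0][:]
                base[2] = base[2] or next((r[2] for r in grp if r[2]), "")  # keep an end time
                base[3] = base[3].split("_")[0]                             # Ping3_1 -> Ping3
                base[4] = "RTT(Avr):%dms" % (sum(rtts) // len(rtts))
                merged.append(base)
            return merged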

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances:

        InstanceA - Production server
        InstanceB - Test server

    My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance schema relationship looks like this:

        InstanceA - Schema Version 1.5
        InstanceB - Schema Version 1.6 (new version being tested)

    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To fulfill this, I am taking database backups of InstanceA and applying them (restoring them) to InstanceB. My question is: how does the schema version affect the restoral process? I know I can do this:

        Backup InstanceA     - Schema Version 1.5
        Restore to InstanceB - Schema Version 1.5

    But can I do this?

        Backup InstanceA     - Schema Version 1.5
        Restore to InstanceB - Schema Version 1.6 (new version being tested)

    If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 by just having an altered stored proc, I imagine that this type of schema change shouldn't affect the restoral process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restoral process. I hope I've made this clear enough. Thanks in advance for any input!

  • rails: has_many :through validation?

    - by ramonrails
    Rails 2.1.0 (cannot upgrade for now due to several constraints). I am trying to achieve this. Any hints?

    A project has many users through a join model, and a user has many projects through the join model. An Admin class inherits from User and also has some Admin-specific stuff; there is Admin-like inheritance for Supervisor and Operator too. A project has one admin, one supervisor and many operators. Now I want to:

    1. Submit data for project, admin, supervisor and operator in a single project form.
    2. Validate all of them and show errors on the project form.

        Project has_many :projects_users ; has_many :users, :through => :projects_users
        User has_many :projects_users ; has_many :projects, :through => :projects_users
        ProjectsUser: id :integer, user_id :integer, project_id :integer, user_type :string
        ProjectsUser belongs_to :project, belongs_to :user
        Admin < User # User has 'type:string' column for STI
        Supervisor < User
        Operator < User

    Is the approach correct? Any and all suggestions are welcome.

  • Virgin STI Help

    - by Mutuelinvestor
    I am working on a horse racing application and I'm trying to utilize STI to model a horse's connections. A horse's connections comprise his owner, trainer and jockey. Over time, connections can change for a variety of reasons:

    1. The horse is sold to another owner
    2. The owner switches trainers or jockeys
    3. The horse is claimed by a new owner

    As it stands now, I have modeled this with the following tables: horses, connections (join table), and stakeholders (stakeholder has three subclasses: jockey, trainer & owner). Here are my classes and associations:

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner_stakeholder, :through => :connection
          has_one :jockey_stakeholder, :through => :connection
          has_one :trainer_stakeholder, :through => :connection
        end

        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner_stakeholder
          belongs_to :jockey_stakeholder
          belongs_to :trainer_stakeholder
        end

        class Stakeholder < ActiveRecord::Base
          has_many :connections
          has_many :horses, :through => :connections
        end

        class Owner < Stakeholder
          # Owner specific code goes here.
        end

        class Jockey < Stakeholder
          # Jockey specific code goes here.
        end

        class Trainer < Stakeholder
          # Trainer specific code goes here.
        end

    On the database end, I have inserted a Type column in the connections table. Have I modeled this correctly? Is there a better/more elegant approach? Thanks in advance for your feedback. Jim

  • populate textboxes with xml node attributes

    - by Doug
    I have Data.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <data>
          <album>
            <slide title="Autum Leaves" description="Leaves from the fall of 1986"
                   source="images/Autumn Leaves.jpg" thumbnail="images/Autumn Leaves_thumb.jpg" />
            <slide title="Creek" description="Creek in Alaska"
                   source="images/Creek.jpg" thumbnail="images/Creek_thumb.jpg" />
          </album>
        </data>

    I'd like to be able to edit the attributes of each slide node via a GridView (that has a "Select" column added). So far I have:

        protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
        {
            int selectedIndex = GridView1.SelectedIndex;
            LoadXmlData(selectedIndex);
        }

        private void LoadXmlData(int selectedIndex)
        {
            XmlDocument xmldoc = new XmlDocument();
            xmldoc.Load(MapPath(@"..\photo_gallery\Data.xml"));
            XmlNodeList nodelist = xmldoc.DocumentElement.ChildNodes;
            XmlNode xmlnode = nodelist.Item(selectedIndex);
            titleTextBox.Text = xmlnode.Attributes["title"].InnerText;
            descriptionTextBox.Text = xmlnode.Attributes["description"].InnerText;
            sourceTextBox.Text = xmlnode.Attributes["source"].InnerText;
            thumbTextBox.Text = xmlnode.Attributes["thumbnail"].InnerText;
        }

    The code for LoadXmlData is just a guess on my part -- I'm new to working with XML in this way. I'd like to have the user select a row from the GridView, then populate a set of text boxes with each slide's attributes for updating back to the Data.xml file. The error I'm getting is "Object reference not set to an instance of an object" at the line titleTextBox.Text = xmlnode.Attributes["@title"].InnerText; -- so I'm not reaching the "title" attribute of the slide node. Thanks for any ideas you may have.

  • Complete failure to compile when including CSS Friendly Adapters

    - by david
    Background - I am trying to use the CSS Friendly adapters to override the default styling for the standard ASP.NET menu control used by an existing project. The existing project functions normally and compiles on request without incident. After adding in the code for the CSS Friendly adapter, not only does it not compile, but compilation never even really starts.

    The problem in detail - I am using the sample code from Scott on this page: http://weblogs.asp.net/scottgu/archive/2006/09/08/CSS-Control-Adapter-Toolkit-Update.aspx. The sample project compiles fine; only within the existing project does it fail. It fails without a line number or any other traceable info. It definitely appears to be related to the CSSMenuAdapter.browser file, which has been referenced by others online as the cause of a similar error. I have tried adding and re-adding it, using it as a DLL, using it as a code file in App_Code, etc. I am working with aspdotnetstorefront in this case, although the problem is not unique to them, as I have found other references in software packages online. The only thing is, no one ever says what solved the issue. I am using Windows 7, VS2008 Express and SQL Express 2008 R2. The full error message is:

        Error 10  Exception of type 'System.OutOfMemoryException' was thrown.

    Notice that there is no file, line, or column info. I really need some help here; I have been working on this a long time. This really should have the tag cssfriendlyadapter, but I could not create that.

  • Discriminator based on joined property

    - by Andrew
    Suppose I have this relationship:

        abstract class Base
        {
            int Id;
            int JoinedId;
            ...
        }

        class Joined
        {
            int Id;
            int Discriminator;
            ...
        }

        class Sub1 : Base { ... }
        class Sub2 : Base { ... }

    for the following tables:

        table Base   ( Id int, JoinedId int, ... )
        table Joined ( Id int, Discriminator int, ... )

    I would like to set up a table-per-hierarchy inheritance mapping for the Base, Sub1, Sub2 relationships, but using the Discriminator property from the Joined class as the discriminator. Here's the general idea for the mapping file:

        <class name="Base" table="Base">
          <id name="Id"><generator class="identity"/></id>
          <discriminator />  <!-- ??? or <join> or <many-to-one>? -->
          <subclass name="Sub1" discriminator-value="1">...</subclass>
          <subclass name="Sub2" discriminator-value="2">...</subclass>
        </class>

    Is there any way of accomplishing something like this with <discriminator>, <join>, or <many-to-one>? NHibernate seems to assume the discriminator is a column on the given table (which makes sense to me.. I know this is unorthodox). Thanks.

  • Error: "Cannot simultaneously fetch multiple bags" when calling Configuration.BuildSessionFactory();

    - by Nick Meldrum
    We are getting this error after upgrading to NHibernate 2.1:

        [QueryException: Cannot simultaneously fetch multiple bags.]
           NHibernate.Loader.BasicLoader.PostInstantiate() +418
           NHibernate.Loader.Entity.EntityLoader..ctor(IOuterJoinLoadable persister, String[] uniqueKey, IType uniqueKeyType, Int32 batchSize, LockMode lockMode, ISessionFactoryImplementor factory, IDictionary`2 enabledFilters) +123
           NHibernate.Loader.Entity.BatchingEntityLoader.CreateBatchingEntityLoader(IOuterJoinLoadable persister, Int32 maxBatchSize, LockMode lockMode, ISessionFactoryImplementor factory, IDictionary`2 enabledFilters) +263
           NHibernate.Persister.Entity.AbstractEntityPersister.CreateEntityLoader(LockMode lockMode, IDictionary`2 enabledFilters) +26
           NHibernate.Persister.Entity.AbstractEntityPersister.CreateLoaders() +57
           NHibernate.Persister.Entity.AbstractEntityPersister.PostInstantiate() +1244
           NHibernate.Persister.Entity.SingleTableEntityPersister.PostInstantiate() +18
           NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) +3261
           NHibernate.Cfg.Configuration.BuildSessionFactory() +87

    Without stepping into the NHibernate source, it doesn't look like I can see which mapping is creating the issue. It's a very old application with a load of mapping files; lots of mappings have one-to-many bags in them, all lazily instantiated. For example:

        <bag name="Ownership" lazy="true" cascade="all" inverse="true" outer-join="auto" where="fkOwnershipStatusID!=6">
          <key column="fkStakeHolderID"/>
          <one-to-many class="StakeholderLib.Ownership,StakeholderLib" />
        </bag>

    maps to:

        public virtual IList Ownership
        {
            get
            {
                if (ownership == null)
                    ownership = new ArrayList();
                return ownership;
            }
            set { ownership = value; }
        }

    Has anyone seen this error before when upgrading to NHibernate 2.1?

  • oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
          and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: I removed all of the select clause except the parts of interest for clarity; in the full query all joined tables are required):

        SELECT e.fullname empname,
               NVL2(ded.has_garn, 'Y', 'N') has_garnishment
        FROM timecard tc,
             orderdetail od,
             orderassign oa,
             employee e,
             employee3 e3,
             customer10 c10,
             order_misc om,
             (SELECT COUNT(*) has_garn, v_ssn
              FROM deductions
              WHERE yymmdd_stop = 0
                 OR (LENGTH(yymmdd_stop) = 7
                     AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') > sysdate)
              GROUP BY v_ssn
             ) ded
        WHERE oa.lrn(+) = tc.lrn_order
          AND om.lrn(+) = od.lrn
          AND od.orderno = oa.orderno
          AND e.ssn = tc.ssn
          AND c10.custno = tc.custno
          AND e.lrn = e3.lrn
          AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I am not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks.

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem. I would like to do this for fun. OK, now that that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:

    1. Break each zip code into two parts: the first 2 digits and the last 3 digits. Make a giant if-else statement first checking the first 2 digits, then checking ranges within the last 3 digits. Or, convert the zips into hex, and see if I can do the same thing using smaller groups.
    2. Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid ones, and write the above code targeting the smaller group.
    3. Break up the hash into separate files, and load them via Ajax as the user types in the zip code -- so perhaps break it into 2 parts: the first for the first 2 digits, the second for the last 3.

    Lastly, I plan to generate the JavaScript files using another program, not by hand.

    Edit: performance matters here. I do want to use this, if it doesn't suck. Performance of the JavaScript code execution + download time.

    Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)
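
    Since the files are to be generated by another program anyway, here is a sketch of that generator in Python (the exact column layout of zips.txt is an assumption beyond "the 2nd column is the zip"; the emitted lookup uses the first-2/last-3 split from point 1):

        import json
        from collections import defaultdict

        groups = defaultdict(set)
        with open("zips.txt") as f:
            for line in f:
                fields = line.split(",")
                if len(fields) < 2:
                    continue
                zipcode = fields[1].strip().strip('"')  # 2nd column, per the question
                if len(zipcode) == 5 and zipcode.isdigit():
                    groups[zipcode[:2]].add(zipcode[2:])

        # Emit: first-2-digits -> comma-joined sorted 3-digit suffixes. Because every
        # suffix is exactly 3 digits, indexOf cannot false-positive across commas.
        table = {k: ",".join(sorted(v)) for k, v in groups.items()}
        with open("zips.js", "w") as out:
            out.write("var ZIPS=" + json.dumps(table) + ";\n")
            out.write("function isValidZip(z){var g=ZIPS[z.slice(0,2)];"
                      "return !!g && g.indexOf(z.slice(2))>=0;}\n")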

  • How to extract a 2x2 submatrix from a bigger matrix

    - by ZaZu
    Hello, I am a very basic user and do not know much about commands used in C, so please bear with me... I can't use very complicated code. I have some knowledge of the stdio.h and ctype.h libraries, but that's about it. I have a matrix in a txt file and I want to load part of it based on my input of the number of rows and columns. For example, I have a 5 by 5 matrix in the file, and I want to extract a specific 2 by 2 submatrix. How can I do that? I created a nested loop:

        FILE *sample;
        sample = fopen("randomfile.txt", "r");
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                fscanf(sample, "%f", &matrix[i][j]);
            }
            fscanf(sample, "\n", &matrix[i][j]);
        }
        fclose(sample);

    Sadly the code does not work. If I have this matrix:

        5.00    4.00    5.00    6.00
        5.00    4.00    3.00   25.00
        5.00    3.00    4.00   23.00
        5.00    2.00  352.00    6.00

    and input 3 for row and 3 for column, I get:

        5.00    4.00    5.00
        6.00    5.00    4.00
        3.00   25.00    5.00

    Not only is this not a 2 by 2 submatrix, but even if I wanted the first 3 rows and first 3 columns, it's not printing them correctly... I need to start at row 3 and column 3, then take the 2 by 2 submatrix! I should have ended up with:

        4.00   23.00
        352.00  6.00

    I heard that I can use fgets and sscanf to accomplish this. Here is my trial code:

        fgets(garbage, 1, fin);
        sscanf(garbage, "\n");

    But this doesn't work either :( What am I doing wrong? Please help. Thanks!
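
    Since the post is about the indexing idea more than the syntax, here is the read-then-slice logic sketched in Python (hypothetical helper; the same structure -- read everything, keep only the wanted rows and columns -- ports directly to C's fscanf loops):

        def read_submatrix(path, start_row, start_col, size):
            """Return the size x size block whose top-left corner is at
            (start_row, start_col), both 1-indexed, from a whitespace-separated file."""
            with open(path) as f:
                full = [[float(tok) for tok in line.split()]
                        for line in f if line.strip()]
            return [row[start_col - 1:start_col - 1 + size]
                    for row in full[start_row - 1:start_row - 1 + size]]

        # read_submatrix("randomfile.txt", 3, 3, 2) -> [[4.0, 23.0], [352.0, 6.0]]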

  • Google App Engine Needs Index Error

    - by Andrew Johnson
    I am currently getting a "needs index" error on my App Engine app: http://www.gaiagps.com/wiki/home. I believe this index should have been created automatically by my index.yaml file (see below). Googling a bit, I think I just need to wait for my index to be built. Is this correct, or do I need to do something manually? Is there some sort of index-building queue? My tables are very, very small right now.

    EDIT: I added the line "indexes:" to my index.yaml, and now App Engine reports the index is building, so I think this is fixed. It's weird that this file was wrong, considering I've never touched it.

        indexes:
        # AUTOGENERATED

        # This index.yaml is automatically updated whenever the dev_appserver
        # detects that a new type of query is run. If you want to manage the
        # index.yaml file manually, remove the above marker line (the line
        # saying "# AUTOGENERATED"). If you want to manage some indexes
        # manually, move them above the marker line. The index.yaml file is
        # automatically uploaded to the admin console when you next deploy
        # your application using appcfg.py.

        - kind: Revision
          properties:
          - name: name
          - name: created

    The app works on my dev server, but not in production. However, on my dev console, I have noticed this error (EDIT: THIS ERROR IS GONE NOW THAT I ADDED "indexes:" TO THE FILE ABOVE):

        ERROR 2009-10-18 04:46:51,908 dev_appserver_index.py:176] Error parsing /gaiagps.com/index.yaml: 'NoneType' object is not callable
          in "<string>", line 13, column 3:
            - kind: Revision
              ^
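
    For reference, the datastore asks for a composite index like that when a query combines an equality filter with a sort order on another property. A hypothetical query with that shape (the Revision model definition here is a guess, not from the post):

        from google.appengine.ext import db

        class Revision(db.Model):
            name = db.StringProperty()
            created = db.DateTimeProperty(auto_now_add=True)

        # Equality on one property + order on another -> needs the
        # (name, created) composite index declared in index.yaml.
        revisions = Revision.all().filter('name =', 'home').order('created').fetch(20)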

  • Hibernate - Problem in parsing mapping file (.hbm.xml)

    - by Yatendra Goel
    I am new to Hibernate. I am getting an exception while running a Hibernate-based application. The exception is as follows:

        16 [main] INFO org.hibernate.cfg.Environment - Hibernate 3.3.2.GA
        16 [main] INFO org.hibernate.cfg.Environment - hibernate.properties not found
        16 [main] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist
        31 [main] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling
        94 [main] INFO org.hibernate.cfg.Configuration - configuring from resource: /hibernate.cfg.xml
        94 [main] INFO org.hibernate.cfg.Configuration - Configuration resource: /hibernate.cfg.xml
        219 [main] INFO org.hibernate.cfg.Configuration - Reading mappings from resource : app/data/City.hbm.xml
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(12) Attribute "coloumn" must be declared for element type "property".
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(13) Attribute "coloumn" must be declared for element type "property".
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(14) Attribute "coloumn" must be declared for element type "property".

    It seems that it is not finding the "coloumn" attribute of the property element in the mapping file, but my mapping file does have that attribute. Below is the mapping file (City.hbm.xml):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-mapping PUBLIC
          "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="app.data">
          <class name="City" table="CITY">
            <id column="CITY_ID" name="cityId">
              <generator class="native"/>
            </id>
            <property name="cityDisplyaName" coloumn="CITY_DISPLAY_NAME" />
            <property coloumn="CITY_MEANINGFUL_NAME" name="cityMeaningFulName" />
            <property coloumn="CITY_URL" name="cityURL" />
          </class>
        </hibernate-mapping>

  • Combining aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats(). I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises boolean expressions (or, more specifically, SQL-like criteria) which I then 'translate' back into pure (ANSI) SQL and send to the underlying DB. The following lines are examples of what the language statements will look like, and hopefully they will help further clarify the concept:

        Example 1
        DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42
        Translation: fetch all rows in the underlying table where the computed value of aggregate function foobar() is between 1 and 3 AND the computed value of aggregate function fredstats() is greater than 42

        Example 2
        DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42
        Translation: fetch all rows where the specified criteria match

        Example 3
        DQL statement: foobar('green') / foobar('red') <> 42
        Translation: fetch all rows where the specified criteria match

        Example 4
        DQL statement: foobar('green') - foobar('red') >= 42
        Translation: fetch all rows where the specified criteria match

    Given the following information:

    1. The table upon which the queries above are executed is called 'tbl'.
    2. Table 'tbl' has the structure (id int, name varchar(32), weight float).
    3. The result set returns only tbl.id, tbl.name and the names of the aggregate functions as columns -- so, for example, the foobar() column will be called foobar in the result set. The first DQL query will therefore return a result set with the columns: id, name, foobar, fredstats.

    Given the above, my questions are:

    1. What would be the underlying SQL required for Example 1?
    2. What would be the underlying SQL required for Example 3?
    3. Given an algebraic equation comprising aggregate functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)?

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.
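
    Not from the post, but a minimal sketch of the generalization asked about in question 3, assuming every aggregate call in the DSQL expression can be lifted into the select list and the whole boolean expression moved into a HAVING clause (the alias substitution here is naive string replacement, just to show the shape):

        def dsql_to_sql(agg_calls, predicate):
            """agg_calls: {alias: aggregate SQL expression} for each call in the
            DSQL statement; predicate: the statement with calls replaced by aliases.
            ANSI SQL does not allow select-list aliases inside HAVING, so the full
            expressions are substituted back in there."""
            select_list = ", ".join("%s AS %s" % (expr, alias)
                                    for alias, expr in agg_calls.items())
            having = predicate
            for alias, expr in agg_calls.items():
                having = having.replace(alias, expr)
            return ("SELECT t.id, t.name, %s FROM tbl t "
                    "GROUP BY t.id, t.name HAVING %s" % (select_list, having))

        # Example 1:
        # dsql_to_sql({"fb": "foobar('yellow')", "fs": "fredstats('weight')"},
        #             "fb BETWEEN 1 AND 3 AND fs > 42")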

  • How can I line up columns with Perl's printf when the value might be a number or a string?

    - by user317203
    I am trying to output a document that looks like this (more at http://pastebin.com/dpBAY8Sb):

        10.1.1.1        100  <unknown>      <unknown>      <unknown>  <unknown>   <unknown>
        72.12.148.186    94  Canada         Hamilton       ON  43.250000  -79.833300    0.00
        72.68.209.149    24  United States  Richmond Hill  NY  40.700500  -73.834500  611.32
        72.192.33.34      4  United States  Rocky Hill     CT  41.657800  -72.662700  657.48

    I cannot figure out how to keep the columns lined up when the value sometimes needs floating-point formatting and is sometimes a string. My current code looks something like this:

        if (defined $longitude) {
            printf FILE ("%-8s %.6f", "", $longitude);
        } else {
            $longitude = "<unknown>";
            printf FILE ("%-20s ", $longitude);
        }

    The extra "" throws off the whole column, and it looks like this (more at http://pastebin.com/kcwHyNwb):

        10.1.1.1        100  <unknown>      <unknown>      <unknown>  <unknown>  <unknown>
        72.12.148.186    94  Canada         Hamilton       ON         43.250000  -79.833300    0.00
        72.68.209.149    24  United States  Richmond Hill  NY         40.700500  -73.834500  571.06
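
    The underlying trick is language-agnostic (sketched in Python here to match the other examples on this page; the same format-then-pad two-step works with Perl's sprintf): render the number into a string first, then pad the string, so both cases go through the same width specifier.

        def cell(value, width=20):
            """Render a column cell: numbers get fixed-point formatting,
            missing values get <unknown>; both are padded to the same width."""
            text = "%.6f" % value if value is not None else "<unknown>"
            return "%-*s" % (width, text)

        print(cell(43.25) + cell(None) + cell(-79.8333))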

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way to implement syncing data is. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategies/experience?

    For me, I'm thinking of something like this:

    1. Store the Last Modified Date on the iPhone.
    2. Upon launching, send a message like getNewData.php?lastModifiedDate=...
    3. The server will process it and send back only the data modified since last time.
    4. This data is formatted as follows:

        <+><data id="..."></data></+>                                 // add this to SQLite/CoreData
        <-><data id="..."></data></->                                 // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>  // new modified value

    I don't want to make <+>, <->, <%>... markers for each attribute as well, because it would be too complicated, so probably when I receive a <%> field I would just remove the data with the specified id and then add it again (assuming the id here is not some automatically auto-incremented field).

    5. Once everything is downloaded and updated, update the Last Modified Date field.

    The main problem with this strategy is: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again -- not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
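
    Not from the post, but a minimal sketch of the server side of steps 2-3 in Python (the post's server is PHP; the table names, the tombstone table, and the column set are assumptions) -- returning only what changed since the client's timestamp:

        import sqlite3
        from xml.sax.saxutils import escape

        def changes_since(db_path, last_modified):
            """Build the delta for step 3: rows added/updated since last_modified
            (sent as <%>, since the client removes-then-reinserts those anyway),
            plus ids deleted since then, tracked in a tombstone table."""
            conn = sqlite3.connect(db_path)
            parts = []
            for row_id, name in conn.execute(
                    "SELECT id, name FROM items WHERE modified > ?",
                    (last_modified,)):
                parts.append('<%%><data id="%d"><name>%s</name></data></%%>'
                             % (row_id, escape(name)))
            for (row_id,) in conn.execute(
                    "SELECT id FROM deleted_items WHERE deleted_at > ?",
                    (last_modified,)):
                parts.append('<-><data id="%d"></data></->' % row_id)
            return "\n".join(parts)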

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables:

        OutputPackages (master): |PackageID|
        OutputItems (detail):    |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' on the PackageID column; ItemID is set to auto-increment. Here's the code I'm using to insert masters/details into these tables:

        //fill packages table
        for i := 1 to 10 do
        begin
          Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
          if Package.PackageLoaded then
          begin
            with tblOutputPackages do
            begin
              Insert;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Package.Title;
              FieldByName('Total').AsCurrency := Package.Total;
              Post;
            end;

            //fill items table
            for ii := 1 to 10 do
            begin
              Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
              if Item.Activated then
              begin
                with tblOutputItems do
                begin
                  Append;
                  FieldByName('PackageID').AsInteger := Package.ourNum;
                  FieldByName('Description').AsString := Item.Description;
                  FieldByName('Comment').AsString := Item.Comment;
                  FieldByName('Price').AsCurrency := Item.Price;
                  Post; //this causes the primary key exception
                end;
              end;
            end;
          end;
        end;

    This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I have a duplicate primary key 'ItemID'. I'm not sure what's going on -- this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: I removed the primary key constraint in my DB, and I see that for some reason the auto-increment feature of the OutputItems table isn't working as I expected. Here's how the OutputItems table looks after running the above code:

        ItemID | PackageID
        1      | 1
        1      | 1
        2      | 2
        2      | 2

    I still don't see why all the ItemID values aren't unique... Any ideas?

  • Drill through table does not show correct count when used with a dimension having parent child hierarchy

    - by Arun Singhal
    Hi All, I have a dimension with a parent-child hierarchy, as shown in the code block below. The issue I am facing is that if I have a filter on the parent-child dimension, the drill-through table does not show the filtered data; instead it shows all the data for that dimension.

        <Dimension type="StandardDimension" name="page_type_d" caption="Page Type">
          <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types"
                     allMemberCaption="All Page Types" primaryKey="id">
            <Table name="npg_page_type_view" alias="pt">
            </Table>
            <Level name="Page Type" column="id" nameColumn="display_name"
                   parentColumn="parent_id" nullParentValue="0" type="Integer"
                   uniqueMembers="true" levelType="Regular" hideMemberIf="Never"
                   caption="Page Type">
              <Closure parentColumn="parent_id" childColumn="page_type_id">
                <Table name="dim_page_types_closure">
                </Table>
              </Closure>
            </Level>
          </Hierarchy>
        </Dimension>

    Here is an example. Suppose I have 4 rows in the npg_page_type_view table:

        id   display_name  parent_id
        19   HTML          100
        20   PDF           100
        21   XML           0
        100  Total         0

    Now suppose my fact table has the following records:

        id   count
        19   2
        20   3
        21   1

    The following is my analysis view:

        Total (HTML and PDF) - 5
        HTML - 2
        PDF - 3
        XML - 1

    Now if I add a filter (say, Total) to this analysis view using the OLAP cube, my analysis view shows the following:

        Total (HTML and PDF) - 5

    Up to this point everything works fine. Now if I click on the 5 (to view the drill-through table), it shows me data against all page types, i.e. HTML, PDF, XML, but as per the filter it should show only HTML and PDF. Is this an existing issue, or am I doing something wrong here? Please help me.

  • ASP.Net Problem with Event Handlers and Control Creation Timing

    - by Oliver Weichhold
    What I am trying to achieve here is to display a number of LinkButtons in a RadGrid column. The buttons are generated from a collection property member of the bound grid row item. The CollectionLinkButton control is nothing more than an asp:Panel-derived control that populates its child controls from "DataItem.SomeCollection", and this is working fine. The problem I am facing is with this part:

        Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'

    This is because the databound Collection property is populated so late in the lifecycle of the page that the LinkButton controls the CollectionLinkButton class creates from the collection are not available yet during postback, when the Click event handler is supposed to fire, and I currently have no idea how to solve this problem.

        <radG:RadGrid ID="grid" runat="server" DataSourceID="ds_AB">
          <MasterTableView>
            <Columns>
              <radG:GridTemplateColumn>
                <ItemTemplate>
                  <local:CollectionLinkButton ID="LinkButton1" runat="server"
                      CssClass="EntityLinkButton"
                      Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'
                      CollectionProperty="Id" CollectionDisplayProperty="Name"
                      Text='<%# DataBinder.Eval(Container, "DataItem.Name") %>'>
                  </local:CollectionLinkButton>
                </ItemTemplate>
              </radG:GridTemplateColumn>

  • Convert MsSql stored procedure to MySql

    - by karthik
    I need to convert the following stored procedure from MS SQL to MySQL. I am new to MySQL... help needed.

        CREATE PROC InsertGenerator (@tableName varchar(100)) AS
        -- Declare a cursor to retrieve column-specific information
        -- for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
          SELECT column_name, data_type
          FROM information_schema.columns
          WHERE table_name = @tableName
        OPEN cursCol

        DECLARE @string nvarchar(3000)     -- for storing the first half of the INSERT statement
        DECLARE @stringData nvarchar(3000) -- for storing the data (VALUES) related statement
        DECLARE @dataType nvarchar(1000)   -- data types returned for respective columns
        SET @string = 'INSERT ' + @tableName + '('
        SET @stringData = ''
        DECLARE @colName nvarchar(50)

        FETCH NEXT FROM cursCol INTO @colName, @dataType
        IF @@fetch_status < 0
        BEGIN
          PRINT 'Table ' + @tableName + ' not found, processing skipped.'
          CLOSE cursCol
          DEALLOCATE cursCol
          RETURN
        END

        WHILE @@FETCH_STATUS = 0
        BEGIN
          IF @dataType IN ('varchar','char','nchar','nvarchar')
          BEGIN
            SET @stringData = @stringData + '''''''''+ isnull(' + @colName + ','''')+'''''',''+'
          END
          ELSE IF @dataType IN ('text','ntext') -- if the datatype is text or something else
          BEGIN
            SET @stringData = @stringData + '''''''''+ isnull(cast(' + @colName + ' as varchar(2000)),'''')+'''''',''+'
          END
          ELSE IF @dataType = 'money' -- because money doesn't get converted from varchar implicitly
          BEGIN
            SET @stringData = @stringData + '''convert(money,''''''+ isnull(cast(' + @colName + ' as varchar(200)),''0.0000'')+''''''),''+'
          END
          ELSE IF @dataType = 'datetime'
          BEGIN
            SET @stringData = @stringData + '''convert(datetime,''''''+ isnull(cast(' + @colName + ' as varchar(200)),''0'')+''''''),''+'
          END
          ELSE IF @dataType = 'image'
          BEGIN
            SET @stringData = @stringData + '''''''''+ isnull(cast(convert(varbinary,' + @colName + ') as varchar(6)),''0'')+'''''',''+'
          END
          ELSE -- presuming the data type is int, bit, numeric, decimal
          BEGIN
            SET @stringData = @stringData + '''''''''+ isnull(cast(' + @colName + ' as varchar(200)),''0'')+'''''',''+'
          END
          SET @string = @string + @colName + ','
          FETCH NEXT FROM cursCol INTO @colName, @dataType
        END
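
    Not an answer in MySQL's own procedure dialect, but since the goal is just emitting INSERT statements for a table's rows, here is a sketch of the same idea as a Python script against MySQL, using MySQL Connector/Python (connection arguments are placeholders, and the quoting is deliberately naive):

        import mysql.connector  # pip install mysql-connector-python

        def insert_generator(table, **conn_args):
            """Emit INSERT statements reproducing every row of `table`,
            mirroring what the T-SQL proc above builds with its cursor."""
            conn = mysql.connector.connect(**conn_args)
            cur = conn.cursor()
            cur.execute(
                "SELECT column_name FROM information_schema.columns "
                "WHERE table_name = %s ORDER BY ordinal_position", (table,))
            cols = [r[0] for r in cur.fetchall()]
            # Identifiers cannot be bound as parameters, so they are interpolated.
            cur.execute("SELECT %s FROM %s" % (", ".join(cols), table))
            for row in cur.fetchall():
                vals = ", ".join(
                    "NULL" if v is None else
                    "'%s'" % str(v).replace("'", "''")  # naive quoting, fine for a sketch
                    for v in row)
                print("INSERT INTO %s (%s) VALUES (%s);" % (table, ", ".join(cols), vals))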
