Search Results

Search found 24705 results on 989 pages for 'tally table'.


  • Comparing 2 mysql tables and updating only records that have changed

    - by Roland
    I have two MySQL tables and want to copy only the information that has changed from table A to table B. For example, if row 1, column 2 has changed in table A, I want to update only that column in table B. Table B is not the same as A, but it has the same columns that also exist in A. The other solution I have is to just clear table B and replace it with the contents of table A; the problem is that the script would take longer to execute, since there are more than 10,000 records. Any advice on which method would work best will be highly appreciated.
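    A minimal sketch of the in-place comparison approach, assuming both tables share a primary key id and a column col2 (these names are placeholders, not from the question):

        UPDATE table_b b
        INNER JOIN table_a a ON a.id = b.id
        SET b.col2 = a.col2
        WHERE NOT (b.col2 <=> a.col2);

    MySQL's NULL-safe equality operator <=> makes the changed-value test catch NULLs too; with an index on the join key this touches only the rows that actually differ.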

    Read the article

  • How to force Grails GORM to respect the DB schema?

    - by fabien-barbier
    I have two domains:

        class CodeSet {
            String id
            String owner
            String comments
            String geneRLF
            String systemAPF

            static hasMany = [cartridges:Cartridge]

            static constraints = {
                id(unique:true, blank:false)
            }

            static mapping = {
                table 'code_set'
                version false
                columns {
                    id column:'code_set_id', generator: 'assigned'
                    owner column:'owner'
                    comments column:'comments'
                    geneRLF column:'gene_rlf'
                    systemAPF column:'system_apf'
                }
            }
        }

    and:

        class Cartridge {
            String id
            String code_set_id
            Date runDate

            static belongsTo = CodeSet

            static constraints = {
                id(unique:true, blank:false)
            }

            static mapping = {
                table 'cartridge'
                version false
                columns {
                    id column:'cartridge_id', generator: 'assigned'
                    code_set_id column:'code_set_id'
                    runDate column:'run_date'
                }
            }
        }

    Actually, with those models I get the tables code_set, cartridge, and a join table code_set_cartridge (two fields: code_set_cartridges_id, cartridge_id). I would like to not have the code_set_cartridge table, but keep the relationship code_set -- 1:n -- cartridge. In other words, how can I keep the association between code_set and cartridge without an intermediate table (using code_set_id as the primary key in code_set and as a foreign key in cartridge)? Can mapping with GORM be done without an intermediate table?

    Read the article

  • SQL Server Efficiently dropping a group of rows with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have the following layout of a table:

        Table: Stars
            starId bigint
            categoryId bigint
            starname varchar(200)

    But my problem is that I have millions and millions of rows, so when I want to delete stars from the table Stars it is too intense on SQL Server. I cannot use the built-in partitioning for 2005+ because I do not have an Enterprise license. When I do delete, though, I always delete a whole categoryId at a time. I thought of doing a design like this:

        Table: Star_1
            starId bigint
            categoryId bigint constraint rock=1
            starname varchar(200)

        Table: Star_2
            starId bigint
            categoryId bigint constraint rock=2
            starname varchar(200)

    In this way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple drop table. My question is: is it a problem to have hundreds of thousands of tables in your SQL Server? The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?
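    If hundreds of thousands of tables feel risky, a hedged middle ground on Standard Edition is a partitioned view over one table per category, each carrying a CHECK constraint; a category is then emptied with a TRUNCATE rather than row-by-row deletes. A sketch using the question's layout:

        CREATE TABLE Star_1 (
            starId bigint NOT NULL,
            categoryId bigint NOT NULL CHECK (categoryId = 1),
            starname varchar(200),
            CONSTRAINT PK_Star_1 PRIMARY KEY (starId, categoryId)
        );
        -- ...one table per category, then:
        CREATE VIEW Stars AS
            SELECT starId, categoryId, starname FROM Star_1
            UNION ALL
            SELECT starId, categoryId, starname FROM Star_2;

        -- Emptying a category deallocates its pages instead of logging each row:
        TRUNCATE TABLE Star_1;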

    Read the article

  • If you delete a DOM element, do any events that started with that element continue to bubble?

    - by Matt
    What behavior should I expect if I delete a DOM element that was used to start an event bubble, or whose child started the event bubble - will the event continue to bubble if the element is removed? For example, let's say you have a table and want to detect click events on the table cells. Another piece of JS has executed an AJAX request that will eventually replace the table, in full, once the request is complete. What happens if I click the table and, immediately after, the table gets replaced by the successful completion of the AJAX request? I ask because I am seeing some behavior where the click events don't seem to be bubbling, but it is hard to duplicate. I am watching the event on a parent element of the table (instead of attaching the event to every TD), and it just doesn't seem to reach it sometimes.

    Read the article

  • sql server datetime

    - by cvista
    Hey, I have the following query:

        select * from table where table.DateUpdated >= '2010-05-03 08:31:13:000'

    All the rows in the table being queried have the following DateUpdated: 2010-05-03 08:04:50.000. It returns all of the rows in the table, even though it should return none. I am pretty sure this is because of some crappy date/time regional thing. If I swap the date to be

        select * from table where table.DateUpdated >= '2010-03-05 08:31:13:000'

    then it does as it should. How can I force everything to use the same settings? This is doing my head in :) This is SQL generated by NHibernate from my WCF service, if that matters. w://
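    The literal '2010-05-03 08:31:13' is ambiguous under SQL Server's regional DATEFORMAT settings (ymd vs ydm), which would explain the swapped month/day behaviour. A few hedged ways to pin the interpretation down:

        -- ISO 8601 with the 'T' separator is parsed the same way regardless of
        -- login language or DATEFORMAT settings:
        SELECT * FROM myTable WHERE myTable.DateUpdated >= '2010-05-03T08:31:13';

        -- Or fix the session's interpretation explicitly:
        SET DATEFORMAT ymd;

        -- Or convert with an explicit style (120 = ODBC canonical):
        SELECT * FROM myTable
        WHERE myTable.DateUpdated >= CONVERT(datetime, '2010-05-03 08:31:13', 120);

    (myTable stands in for the real table name, since "table" itself is a reserved word.)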

    Read the article

  • JTable Boolean cell: how to read it

    - by dimitar
    Hello guys, I have a table with a column that contains Boolean values. When I try to run the program I have a problem:

        System.out.println(table.getColumnClass(5));
        b = (Boolean) table.getValueAt(row, 5);

    It prints class java.lang.Boolean, but also:

        Exception in thread "AWT-EventQueue-0" java.lang.ClassCastException:
        java.lang.String cannot be cast to java.lang.Boolean

    And I even tried:

        b = (Boolean) table.getValueAt(row, 5);
        b = Boolean.parseBoolean((String) table.getValueAt(row, 5));
        b = table.getValueAt(row, 5);

    but those show errors too. So how do I fix this, guys?

    Read the article

  • T-SQL: Build Nested Set From Parent-Child Relationship

    - by Peder Rice
    I have a table that stores my Customer hierarchy with a nested set (due to the specific design of the application, I wasn't able to leverage just a Customer/Parent Customer mapping table). To simplify maintenance of this table, I've built a couple of stored procedures to handle moving nodes around and creating new nodes, but it's significantly more work than maintaining a Customer/Parent Customer table. Further, these structures are very fragile. So I'm looking for a way to have a Customer/Parent Customer table and then convert that table to a nested set on demand. Does anyone have a link to such an implementation?
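    For the on-demand conversion, here is a sketch of the classic stack-based rebuild (it assumes an adjacency table Customer(CustomerID, ParentCustomerID) with a single NULL-parent root; all names are made up, and this is untested against the real schema):

        CREATE TABLE #Stack (StackTop int, CustomerID int, Lft int, Rgt int);

        DECLARE @counter int, @top int;
        SET @counter = 2;
        SET @top = 1;

        -- Push the root with Lft = 1
        INSERT INTO #Stack (StackTop, CustomerID, Lft, Rgt)
        SELECT 1, CustomerID, 1, NULL
        FROM Customer WHERE ParentCustomerID IS NULL;

        WHILE @counter <= 2 * (SELECT COUNT(*) FROM Customer)
        BEGIN
            IF EXISTS (SELECT * FROM Customer c
                       JOIN #Stack s ON c.ParentCustomerID = s.CustomerID
                       WHERE s.StackTop = @top
                         AND c.CustomerID NOT IN (SELECT CustomerID FROM #Stack))
            BEGIN
                -- Push one unvisited child: it gets the next Lft number
                INSERT INTO #Stack (StackTop, CustomerID, Lft, Rgt)
                SELECT TOP 1 @top + 1, c.CustomerID, @counter, NULL
                FROM Customer c
                JOIN #Stack s ON c.ParentCustomerID = s.CustomerID
                WHERE s.StackTop = @top
                  AND c.CustomerID NOT IN (SELECT CustomerID FROM #Stack)
                ORDER BY c.CustomerID;
                SET @top = @top + 1;
            END
            ELSE
            BEGIN
                -- Pop: the node on top of the stack gets the next Rgt number
                UPDATE #Stack SET Rgt = @counter, StackTop = -StackTop
                WHERE StackTop = @top;
                SET @top = @top - 1;
            END
            SET @counter = @counter + 1;
        END

        SELECT CustomerID, Lft, Rgt FROM #Stack ORDER BY Lft;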

    Read the article

  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...), and also when a team reaches one. I want to know which player or team reached 100 goals first, second, third... I thought of using triggers with tables to accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert/update the player_accumulator and team_accumulator tables. This trigger could also verify whether a player or team has reached a milestone listed in a milestone table containing numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by a player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement "milestones"? Is there an easier way without triggers? I'm using PostgreSQL.
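    A minimal PL/pgSQL sketch of the accumulator trigger (it assumes player_accumulator has a unique key on player_id; table names follow the question, the function name is made up):

        CREATE OR REPLACE FUNCTION accumulate_player_stats() RETURNS trigger AS $$
        BEGIN
            UPDATE player_accumulator
            SET total_goals   = total_goals + NEW.goal,
                total_assists = total_assists + NEW.assist
            WHERE player_id = NEW.player_id;

            IF NOT FOUND THEN
                INSERT INTO player_accumulator (player_id, total_goals, total_assists)
                VALUES (NEW.player_id, NEW.goal, NEW.assist);
            END IF;

            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER stats_accumulate
        AFTER INSERT ON stats
        FOR EACH ROW EXECUTE PROCEDURE accumulate_player_stats();

    The same body could then compare the new totals against the milestone table and insert into player_milestone when a threshold is crossed; a team_accumulator trigger would mirror this keyed on team_id. (On PostgreSQL 9.5+, INSERT ... ON CONFLICT would replace the UPDATE-then-INSERT pair.)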

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reports memory usage in realtime on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses However, I noticed memory issues when purging objects created by this, using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable, and store each object into a table. The memory used is now 300k. I would then exit, iterating through the table and setting each element in the table to nil, but I would never get back the 146k; usually after this I am left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k, so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition I've tried a simpler class file (Account2.lua) that doesn't rely on a class.lua. This still incurs memory fragmentation, but not as much as the one that uses Class.lua. Can anybody explain what is going on here? How can I purge these objects and get back the memory? Here is the code:

        --------Class.lua------
        -- class.lua
        -- Compatible with Lua 5.1 (not 5.0).
        -- http://lua-users.org/wiki/SimpleLuaClasses
        function class(base, ctor)
            local c = {}     -- a new class instance
            if not ctor and type(base) == 'function' then
                ctor = base
                base = nil
            elseif type(base) == 'table' then
                -- our new class is a shallow copy of the base class!
                for i,v in pairs(base) do
                    c[i] = v
                end
                c._base = base
            end
            -- the class will be the metatable for all its objects,
            -- and they will look up their methods in it.
            c.__index = c

            -- expose a ctor which can be called by ()
            local mt = {}
            mt.__call = function(class_tbl, ...)
                local obj = {}
                setmetatable(obj, c)
                if ctor then
                    ctor(obj, ...)
                else
                    -- make sure that any stuff from the base class is initialized!
                    if base and base.init then
                        base.init(obj, ...)
                    end
                end
                return obj
            end
            c.init = ctor
            c.instanceOf = function(self, klass)
                local m = getmetatable(self)
                while m do
                    if m == klass then return true end
                    m = m._base
                end
                return false
            end
            setmetatable(c, mt)
            return c
        end

        --------Account.lua------
        -- Import Class template
        require 'class'
        local classname = "Account"

        -- Declare class constructor
        Account = class(function(acc, balance)
            -- Instance variables declared here.
            if balance ~= nil then
                acc.balance = balance
            else
                -- default value
                acc.balance = 2097
            end
            acc.classname = classname
        end)

        --------Account2.lua------
        local account2 = {}
        account2.classname = "unnamed"
        account2.balance = 2097

        ----------- Constructor 1
        do
            local metatable = {
                __index = account2;
            }
            function Account2()
                return setmetatable({}, metatable);
            end
        end

        --------Main.lua------
        require 'Account'
        require 'Account2'

        MAX_OBJ = 5000;
        test_value = 1000;
        Obj_Table = {};

        MODE_ACC0 = 0 -- integers
        MODE_ACC1 = 1 -- Account
        MODE_ACC2 = 2 -- Account2
        TEST_MODE = MODE_ACC0;
        Lua_mem = "";

        print("##1) collectgarbage('count'): " .. collectgarbage('count'));

        function Load()
            for i = 1, MAX_OBJ do
                if TEST_MODE == MODE_ACC0 then
                    table.insert(Obj_Table, test_value);
                elseif TEST_MODE == MODE_ACC1 then
                    table.insert(Obj_Table, Account(test_value));  -- Account.lua
                elseif TEST_MODE == MODE_ACC2 then
                    table.insert(Obj_Table, Account2());           -- Account2.lua
                    Obj_Table[i].balance = test_value;
                end
            end
            print("##2) collectgarbage('count'): " .. collectgarbage('count'));
        end

        function Purge()
            -- metatable purge
            if TEST_MODE ~= MODE_ACC0 then
                -- purge stage 0:
                print("set each element's metatable to nil")
                for i = 1, MAX_OBJ do
                    setmetatable(Obj_Table[i], nil);
                end
            end

            -- purge stage 1:
            print("set table element to nil")
            for i = 1, MAX_OBJ do
                Obj_Table[i] = nil;
            end

            -- purge stage 2:
            print("start table.remove...");
            for i = 1, MAX_OBJ do
                table.remove(Obj_Table, i);
            end
            print("...end table.remove");

            -- purge stage 3:
            print("create new object_table {}");
            Obj_Table = {};

            -- purge stage 4:
            print("collectgarbage('collect')");
            collectgarbage('collect');
            print("##3) collectgarbage('count'): " .. collectgarbage('count'));
        end

        -- Loop callback
        function OnUpdate()
            collectgarbage('collect');
            Lua_mem = collectgarbage('count');
        end

        -------------------
        -- NOTE:
        -- On start of game runs Load(), another runs Purge()

    Update: I've updated the code with suggestions from comments below, and will post my findings later today.

    Read the article

  • InnoDB Cascade Rule that looks at 2 columns?

    - by Travis
    I have the following MySQL InnoDB tables...

        TABLE foldersA (
            ID
            title
        )
        TABLE foldersB (
            ID
            title
        )
        TABLE records (
            ID
            folderID
            folderType
            title
        )

    folderID in table "records" can point to ID in either "foldersA" or "foldersB", depending on the value of folderType (0 or 1). I am wondering: is there a way to create a CASCADE rule such that the appropriate rows in table "records" are automatically deleted when a row in either foldersA or foldersB is deleted? Or in this situation, am I forced to delete the rows in table "records" programmatically? Thanks for your help!
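    With this schema the answer is generally no: a single InnoDB foreign key can't switch its referenced table on a discriminator column. A common workaround is two nullable FK columns, each with its own cascade rule (a sketch; constraint names are invented):

        CREATE TABLE records (
            ID int NOT NULL PRIMARY KEY,
            folderA_ID int NULL,
            folderB_ID int NULL,
            title varchar(255),
            CONSTRAINT fk_records_foldersA FOREIGN KEY (folderA_ID)
                REFERENCES foldersA (ID) ON DELETE CASCADE,
            CONSTRAINT fk_records_foldersB FOREIGN KEY (folderB_ID)
                REFERENCES foldersB (ID) ON DELETE CASCADE
        ) ENGINE=InnoDB;

    Exactly one of the two columns is populated per row, which replaces the folderID/folderType pair and lets InnoDB do the cascading.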

    Read the article

  • SQL Operators as text in where clause

    - by suggy1982
    I have the following table, which is used for storing bandings and is maintained via a web frontend:

        CREATE TABLE [dbo].[Banding](
            [BandingID] [int] IDENTITY(1,1) NOT NULL,
            [ValueLowerLimitOperator] [varchar](10) NULL,
            [ValueLowerLimit] [decimal](9, 2) NULL,
            [ValueUpperLimitOperator] [varchar](10) NULL,
            [ValueUpperLimit] [decimal](9, 2) NULL,
            [VolumeLowerLimitOperator] [varchar](10) NULL

    The operator fields store values such as <, > and <=. I want to get to a position where I can use the operator values stored in the table in a case statement in a where clause, like this:

        SELECT *
        FROM table
        WHERE CASE ValueLowerLimitOperator
            WHEN '<' THEN VALUE < X
            WHEN '>' THEN VALUE > X
        END

    rather than having to write multiple case or if statements for each permutation. Does anyone have any suggestions for how I can decode the operator values stored in the table as part of my query and then use them in a case/where statement?
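    One way to avoid dynamic SQL entirely is to expand each stored operator into a boolean term (a sketch; @X is the value being banded, and the comparison direction may need flipping depending on how the limits are defined):

        SELECT *
        FROM dbo.Banding
        WHERE (ValueLowerLimitOperator = '<'  AND @X <  ValueLowerLimit)
           OR (ValueLowerLimitOperator = '<=' AND @X <= ValueLowerLimit)
           OR (ValueLowerLimitOperator = '>'  AND @X >  ValueLowerLimit)
           OR (ValueLowerLimitOperator = '>=' AND @X >= ValueLowerLimit)
           OR (ValueLowerLimitOperator = '='  AND @X =  ValueLowerLimit);

    A CASE expression can only emit a value, not an operator, so the OR expansion (or dynamic SQL via sp_executesql) is the usual pattern here.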

    Read the article

  • Unable to relate two MySQL tables (foreign keys)

    - by KPL
    Hello people, here's my users table:

        CREATE TABLE IF NOT EXISTS `users` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `username` varchar(100) NOT NULL,
            `expiry` varchar(6) NOT NULL,
            `contact_id` int(11) NOT NULL,
            `email` varchar(255) NOT NULL,
            `password` varchar(100) NOT NULL,
            `level` int(3) NOT NULL,
            `active` tinyint(4) NOT NULL DEFAULT '1',
            PRIMARY KEY (`id`,`email`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    And here's my contact_info table:

        CREATE TABLE IF NOT EXISTS `contact_info` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `name` varchar(255) NOT NULL,
            `email_address` varchar(255) NOT NULL,
            `company_name` varchar(255) NOT NULL,
            `license_number` varchar(255) NOT NULL,
            `phone` varchar(30) NOT NULL,
            `fax` varchar(30) NOT NULL,
            `mobile` varchar(30) NOT NULL,
            `category` varchar(100) NOT NULL,
            `country` varchar(20) NOT NULL,
            `state` varchar(20) NOT NULL,
            `city` varchar(100) NOT NULL,
            `postcode` varchar(50) NOT NULL,
            PRIMARY KEY (`id`,`email_address`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    The system uses username to log users in. I want to modify it in such a way that it uses email for login, but there's no email_address in the users table. I have added a foreign key - email in the users table (which is email_address in contact_info). How should I query the database?
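    A hedged sketch of one way to wire this up: InnoDB requires the referenced column to be indexed (unique is safest here), and both columns must have identical types and character sets; the login lookup can then filter on users.email directly:

        ALTER TABLE contact_info ADD UNIQUE KEY email_address_idx (email_address);

        ALTER TABLE users
            ADD CONSTRAINT fk_users_contact_email
            FOREIGN KEY (email) REFERENCES contact_info (email_address);

        -- Login check by email instead of username:
        SELECT * FROM users
        WHERE email = ? AND password = ? AND active = 1;

    (Many schemas would instead reference contact_info.id from the existing users.contact_id column and join for the address; the FK-on-email route only works if the two email columns are kept in sync.)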

    Read the article

  • tablednd post issue help please

    - by netrise
    Hi, please help - I've got a terrible headache and my script is very simple. Why can't I get $_POST['table-2'] after submitting the update button? I want to get the ID numbers sorted.

        # index.php
        <head>
        <script src="jquery.js" type="text/javascript"></script>
        <script src="jquery.tablednd.js" type="text/javascript"></script>
        <script src="jqueryTableDnDArticle.js" type="text/javascript"></script>
        </head>
        <body>
        <form method='POST' action=index.php>
        <table id="table-2" cellspacing="0" cellpadding="2">
        <tr id="a"><td>1</td><td>One</td><td><input type="text" name="one" value="one"/></td></tr>
        <tr id="b"><td>2</td><td>Two</td><td><input type="text" name="two" value="two"/></td></tr>
        <tr id="c"><td>3</td><td>Three</td><td><input type="text" name="three" value="three"/></td></tr>
        <tr id="d"><td>4</td><td>Four</td><td><input type="text" name="four" value="four"/></td></tr>
        <tr id="e"><td>5</td><td>Five</td><td><input type="text" name="five" value="five"/></td></tr>
        </table>
        <input type="submit" name="update" value="Update">
        </form>
        <?php
        $result[] = $_POST['table-2'];
        foreach($result as $value) {
            echo "$value<br/>";
        }
        ?>
        </body>

        # jqueryTableDnDArticle.js
        ...
        $("#table-2").tableDnD({
            onDragClass: "myDragClass",
            onDrop: function(table, row) {
                var rows = table.tBodies[0].rows;
                var debugStr = "Row dropped was "+row.id+". New order: ";
                for (var i=0; i<rows.length; i++) {
                    debugStr += rows[i].id+" ";
                }
                //$("#debugArea").html(debugStr);
                $.ajax({
                    type: "POST",
                    url: "index.php",
                    data: $.tableDnD.serialize(),
                    success: function(html){
                        alert("Success");
                    }
                });
            },
            onDragStart: function(table, row) {
                $("#debugArea").html("Started dragging row "+row.id);
            }
        });

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are many users running a C# GUI polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg)

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like:

    (image: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg)

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold.
    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
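    One mitigation often suggested for dueling upserts on the same clustered key is to take restrictive key-range locks up front inside a single transaction, so the two writers serialize instead of deadlocking (a sketch, not verified against this schema):

        BEGIN TRANSACTION;

        UPDATE c WITH (UPDLOCK, HOLDLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

        INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.MDID, t.LastUpdate, t.Value, t.Source
        FROM #TEMPTABLE2 t
        WHERE NOT EXISTS (SELECT 1 FROM MarketDataCurrent c WITH (UPDLOCK, HOLDLOCK)
                          WHERE c.MDID = t.MDID);

        COMMIT;

    The NOLOCK existence check in the original procs is also worth a look: it lets each writer decide to insert based on data the other is mid-way through changing.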

    Read the article

  • sql stored procedure in visual studio 2008

    - by Greg
    Hi, I want to write a stored procedure in Visual Studio that receives the name of a project as a parameter, runs in database TT, and copies data from the TT.dbo.LCTemp table (where LC is the project name received as the parameter) to the TT.dbo.Points table. Both tables have 3 columns: PT_ID, Projectname and DateCreated. I think I have written it wrong; here it is:

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            declare @ptID varchar(20)
            declare @table varchar(20)
            set @table = 'TT.dbo.' + @project + 'Temp'
            set @ptID = @table + '.PT_ID'
            Insert into TT.dbo.Points
            Select * from [@table]
            where [@ptID] Not in (Select PT_ID from TT.dbo.Points)
        END

    Any idea what I did wrong? Thanks! :) Greg
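    Object names can't be supplied through T-SQL variables the way the proc above tries ([@table] is read as a literal name, not the table it points to); the usual fix is dynamic SQL. A sketch under the question's naming convention:

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            DECLARE @sql nvarchar(max);

            SET @sql = N'INSERT INTO TT.dbo.Points (PT_ID, Projectname, DateCreated)
                SELECT PT_ID, Projectname, DateCreated
                FROM TT.dbo.' + QUOTENAME(@project + 'Temp') + N'
                WHERE PT_ID NOT IN (SELECT PT_ID FROM TT.dbo.Points);';

            EXEC sp_executesql @sql;
        END

    QUOTENAME guards the concatenated table name against injection through the @project parameter.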

    Read the article

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that MySQL automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement ran. But after the second ALTER TABLE statement ran, I think MySQL should also have automatically created another non-unique index question_tag_question_id_question_id on the question_id column for me. But as you can see from the SHOW INDEXES output, it's not there. Why does MySQL forget about the second ALTER TABLE statement? By the way, since I have already created a unique index question_id_tag_id_idx used by both the question_id and tag_id columns, is creating a separate index for each of them redundant?

        mysql> DROP DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> CREATE DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> USE mydatabase;
        Database changed

        mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT,
            ->   PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL,
            ->   UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT,
            ->   UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id),
            ->   PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id
            ->   FOREIGN KEY (tag_id) REFERENCES tag(id);
        Query OK, 0 rows affected (0.10 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id
            ->   FOREIGN KEY (question_id) REFERENCES question(id);
        Query OK, 0 rows affected (0.13 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> SHOW INDEXES FROM question_tag;
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table        | Non_unique | Key_name                   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | question_tag |          0 | PRIMARY                    |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | PRIMARY                    |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          1 | question_tag_tag_id_tag_id |            1 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        5 rows in set (0.01 sec)

    Read the article

  • How to create an updateable panel using AjaxLink of Wicket?

    - by Arne
    I am trying to realize a link (in fact many links) that updates a table in a website using an AjaxLink of Wicket. But I fail: the table is never updated (I have setOutputMarkupId(true) and call addComponent, but there must be something else that's wrong). How can I realize a panel with a number of links and a table that displays dynamic data, dependent on the link clicked? Can someone give an example? (Maybe two links: when the first one is clicked, the table displays two random numbers from 1-10; when the second is clicked, the table displays random numbers from 1-100.) Without reloading the entire page, but only the HTML for the table?

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR: cannot drop table users because other objects depend on it
        DETAIL: constraint artist_users_owner_fk on table artist_users depends on table users
                constraint site_users_owner_fk on table site_users depends on table users
        HINT: Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant it would ignore the dependents, but maybe that only applies to deleting rows and not dropping tables? I should mention that I can run auto_migrate after I nuke the db once, but after that, errors. Any suggestions or advice much appreciated.

    Read the article

  • Counting characters in an Access database column using SQL

    - by jzr
    I have the following table:

        col1  col2  col3  col4
        ====  ====  ====  ====
        1233  4566  ABCD  CDEF
        1233  4566  ACD1  CDEF
        1233  4566  D1AF  CDEF

    I need to count the characters in col3, so from the data in the previous table it would be:

        char  count
        ====  =====
        A     3
        B     1
        C     2
        D     3
        F     1
        1     2

    Is this possible to achieve by using SQL only? At the moment I am thinking of passing a parameter into the SQL query, counting the characters one by one and then summing them; however, I haven't started the VBA part yet, and frankly I wouldn't want to. This is my query at the moment:

        PARAMETERS X Long;
        SELECT First(Mid(TABLE.col3,X,1)) AS [col3 Field],
               Count(Mid(TABLE.col3,X,1)) AS Dcount
        FROM TEST
        GROUP BY Mid(TABLE.col3,X,1)
        HAVING (((Count(Mid([TABLE].[col3],[X],1)))>=1));

    Ideas and help are much appreciated, as I don't usually work with Access and SQL.
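    Fittingly for this search, one set-based option is a small tally (numbers) table covering positions 1..4 (or up to the longest col3 value); Access cross joins it and Mid() then unpivots every character position in one pass (a sketch; Numbers(n) is an assumed helper table):

        SELECT Mid(t.col3, n.n, 1) AS [char], Count(*) AS [count]
        FROM TEST AS t, Numbers AS n
        WHERE n.n <= Len(t.col3)
        GROUP BY Mid(t.col3, n.n, 1);

    This removes the X parameter and the per-position reruns entirely.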

    Read the article

  • Dynamic Like Statement in SQL

    - by Peter McElhinney
    Hey there! I've been racking my brain on how to do this for a while, and I know that some genius on this site will have the answer. Basically I'm trying to do this:

        SELECT column FROM table
        WHERE [table][column] LIKE string1
           OR [table][column] LIKE string2
           OR [table][column] LIKE string3...

    for a list of search strings stored in a column of a table. Obviously I can't write a LIKE statement for each string by hand, because I want the table to be dynamic. Any suggestions would be great. :D EDIT: I'm using MSSQL :(
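    Since the search strings already live in a table, a JOIN builds the OR list implicitly (a sketch for MSSQL; SearchStrings(pattern) and the other names here are placeholders for the real objects):

        SELECT DISTINCT t.col
        FROM dbo.MyTable AS t
        INNER JOIN dbo.SearchStrings AS s
            ON t.col LIKE s.pattern;

    If the stored strings aren't already wildcarded, the join condition becomes t.col LIKE '%' + s.pattern + '%'. DISTINCT keeps a row from repeating when it matches several patterns.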

    Read the article

  • InvalidOperationException sequence contains more than one element even when only one element

    - by user310256
    I have three tables: a tblCompany table, a tblParts table and a link table between them, tblLinkCompanyParts. Since tblLinkCompanyParts is a link table, its columns are LinkCompanyPartID (primary key), plus CompanyID from tblCompany and PartID from tblParts as foreign keys. I have tied them up in the dbml file. In code, if I write LinkCompanyParts.Parts (where LinkCompanyParts is an object of the tblLinkCompanyParts type) to get the corresponding Part object, I get an "InvalidOperationException: Sequence contains more than one element". I have looked at the data in the database and there is only one Parts record associated with the LinkCompanyPartID. The stack trace reads like:

        at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source)
        at System.Data.Linq.EntityRef`1.get_Entity()
        at ...

    I read about SingleOrDefault vs FirstOrDefault, but since the link table should have a one-to-one mapping, I think SingleOrDefault should work; besides, the "SingleOrDefault" statement is being generated behind the scenes in the designer.cs file at the following line:

        return this._Part.Entity;

    Any ideas?

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded:

        Main Thread                                Background Thread
        -----------------------------------------  -------------------------------------------
        viewDidLoad
                                                   Get JSON data (using AFNetworking)
                                                   Create child NSManagedObjectContext (MOC)
                                                   Parse JSON data
                                                   Insert managed objects in child MOC
                                                   Save child MOC
                                                   Post import completion notification
        Receive import completion notification
        Save parent MOC
        Perform fetch and reload table view
                                                   Delete old managed objects in child MOC
                                                   Save child MOC
                                                   Post deletion completion notification
        Receive deletion completion notification
        Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so would appreciate any advice.

    Read the article

  • expand a varchar column very slowly, why?

    - by francs
    Hi, we need to modify a column of a big product table. Usually normal DDL statements execute fast, but the ALTER statement below takes about 10 minutes, and I'd like to know why - I just want to expand a varchar column. The following are the details.

    Table size:

        wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=> \d log_foot_mark
                  Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)      |
         user_agent  | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    Alter column:

        wapreader_log=> \timing
        Timing is on.
        wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark
                        ALTER column user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms

    Read the article
