Search Results

Search found 24675 results on 987 pages for 'table'.


  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reports memory usage in real time on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses However, I noticed memory issues when purging objects created this way, using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable, and store each object in a table. The memory used is now 300k. I would then exit, iterating through the table and setting each element in the table to nil, but would never get back the 146k; usually after this I am left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k, so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition, I've tried a simpler class file (Account2.lua) that doesn't rely on Class.lua. This still incurs memory fragmentation, but not as much as the one that uses Class.lua. Can anybody explain what is going on here? How can I purge these objects and get back the memory? Here is the code:

        --------Class.lua------
        -- class.lua
        -- Compatible with Lua 5.1 (not 5.0).
        -- http://lua-users.org/wiki/SimpleLuaClasses
        function class(base, ctor)
          local c = {}  -- a new class instance
          if not ctor and type(base) == 'function' then
            ctor = base
            base = nil
          elseif type(base) == 'table' then
            -- our new class is a shallow copy of the base class!
            for i, v in pairs(base) do c[i] = v end
            c._base = base
          end
          -- the class will be the metatable for all its objects,
          -- and they will look up their methods in it.
          c.__index = c
          -- expose a ctor which can be called by ()
          local mt = {}
          mt.__call = function(class_tbl, ...)
            local obj = {}
            setmetatable(obj, c)
            if ctor then
              ctor(obj, ...)
            else
              -- make sure that any stuff from the base class is initialized!
              if base and base.init then base.init(obj, ...) end
            end
            return obj
          end
          c.init = ctor
          c.instanceOf = function(self, klass)
            local m = getmetatable(self)
            while m do
              if m == klass then return true end
              m = m._base
            end
            return false
          end
          setmetatable(c, mt)
          return c
        end

        --------Account.lua------
        -- Import Class template
        require 'class'
        local classname = "Account"
        -- Declare class constructor
        Account = class(function(acc, balance)
          -- Instance variables declared here.
          if balance ~= nil then
            acc.balance = balance
          else
            -- default value
            acc.balance = 2097
          end
          acc.classname = classname
        end)

        --------Account2.lua------
        local account2 = {}
        account2.classname = "unnamed"
        account2.balance = 2097
        ----------- Constructor 1
        do
          local metatable = { __index = account2 }
          function Account2()
            return setmetatable({}, metatable)
          end
        end

        --------Main.lua------
        require 'Account'
        require 'Account2'
        MAX_OBJ = 5000
        test_value = 1000
        Obj_Table = {}
        MODE_ACC0 = 0 -- integers
        MODE_ACC1 = 1 -- Account
        MODE_ACC2 = 2 -- Account2
        TEST_MODE = MODE_ACC0
        Lua_mem = ""
        print("##1) collectgarbage('count'): " .. collectgarbage('count'))

        function Load()
          for i = 1, MAX_OBJ do
            if TEST_MODE == MODE_ACC0 then
              table.insert(Obj_Table, test_value)
            elseif TEST_MODE == MODE_ACC1 then
              table.insert(Obj_Table, Account(test_value)) -- Account.lua
            elseif TEST_MODE == MODE_ACC2 then
              table.insert(Obj_Table, Account2())          -- Account2.lua
              Obj_Table[i].balance = test_value
            end
          end
          print("##2) collectgarbage('count'): " .. collectgarbage('count'))
        end

        function Purge()
          -- metatable purge
          if TEST_MODE ~= MODE_ACC0 then
            -- purge stage 0:
            print("set each element's metatable to nil")
            for i = 1, MAX_OBJ do
              setmetatable(Obj_Table[i], nil)
            end
          end
          -- purge stage 1:
          print("set table element to nil")
          for i = 1, MAX_OBJ do
            Obj_Table[i] = nil
          end
          -- purge stage 2:
          print("start table.remove...")
          for i = 1, MAX_OBJ do
            table.remove(Obj_Table, i)
          end
          print("...end table.remove")
          -- purge stage 3:
          print("create new object_table {}")
          Obj_Table = {}
          -- purge stage 4:
          print("collectgarbage('collect')")
          collectgarbage('collect')
          print("##3) collectgarbage('count'): " .. collectgarbage('count'))
        end

        -- Loop callback
        function OnUpdate()
          collectgarbage('collect')
          Lua_mem = collectgarbage('count')
        end

        -------------------
        -- NOTE:
        -- On start of game one input runs Load(), another runs Purge()

    Update: I've updated the code with suggestions from the comments below, and will post my findings later today.

    Read the article

  • InnoDB Cascade Rule that looks at 2 columns?

    - by Travis
    I have the following MySQL InnoDB tables:

        TABLE foldersA ( ID, title )
        TABLE foldersB ( ID, title )
        TABLE records  ( ID, folderID, folderType, title )

    folderID in table "records" can point to ID in either "foldersA" or "foldersB", depending on the value of folderType (0 or 1). I am wondering: is there a way to create a CASCADE rule such that the appropriate rows in table "records" are automatically deleted when a row in either foldersA or foldersB is deleted? Or in this situation, am I forced to delete the rows in table "records" programmatically? Thanks for your help!
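
    A foreign key can't be made conditional on another column, so the usual workaround is a pair of triggers, one per parent table. A minimal sketch (the trigger name and DELIMITER syntax are illustrative, not from the question):

        -- Hypothetical cleanup trigger: when a foldersA row is deleted,
        -- remove the records rows that pointed at it (folderType = 0).
        DELIMITER //
        CREATE TRIGGER foldersA_cleanup
        AFTER DELETE ON foldersA
        FOR EACH ROW
        BEGIN
          DELETE FROM records WHERE folderID = OLD.ID AND folderType = 0;
        END//
        DELIMITER ;

    A matching trigger on foldersB would use folderType = 1.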

    Read the article

  • SQL Operators as text in where clause

    - by suggy1982
    I have the following table, which is used for storing bandings. The table is maintained via a web front end.

        CREATE TABLE [dbo].[Banding](
            [BandingID] [int] IDENTITY(1,1) NOT NULL,
            [ValueLowerLimitOperator] [varchar](10) NULL,
            [ValueLowerLimit] [decimal](9, 2) NULL,
            [ValueUpperLimitOperator] [varchar](10) NULL,
            [ValueUpperLimit] [decimal](9, 2) NULL,
            [VolumeLowerLimitOperator] [varchar](10) NULL

    The operator fields store values such as <, = and <=. I want to get to a position where I can use the operator values stored in the table in a CASE expression in a WHERE clause, like this:

        SELECT *
        FROM table
        WHERE CASE ValueLowerLimitOperator
                WHEN '<' THEN VALUE < X
                WHEN '>' THEN VALUE > X
              END

    rather than having to write multiple CASE or IF statements for each permutation. Does anyone have any suggestions for how I can decode the operator values stored in the table as part of my query and then use them in a CASE/WHERE statement?
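
    T-SQL's CASE is an expression that returns a value, not a predicate, which is why the query above won't parse. The usual rewrite spells the comparison out as boolean logic, one term per stored operator. A minimal sketch (@X is illustrative, and the operand order may need flipping depending on how the bandings are defined):

        SELECT *
        FROM dbo.Banding
        WHERE (ValueLowerLimitOperator = '<'  AND @X <  ValueLowerLimit)
           OR (ValueLowerLimitOperator = '<=' AND @X <= ValueLowerLimit)
           OR (ValueLowerLimitOperator = '='  AND @X =  ValueLowerLimit);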

    Read the article

  • Unable to relate two MySQL tables (foreign keys)

    - by KPL
    Hello people, here's my users table:

        CREATE TABLE IF NOT EXISTS `users` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `username` varchar(100) NOT NULL,
          `expiry` varchar(6) NOT NULL,
          `contact_id` int(11) NOT NULL,
          `email` varchar(255) NOT NULL,
          `password` varchar(100) NOT NULL,
          `level` int(3) NOT NULL,
          `active` tinyint(4) NOT NULL DEFAULT '1',
          PRIMARY KEY (`id`,`email`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    And here's my contact_info table:

        CREATE TABLE IF NOT EXISTS `contact_info` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          `email_address` varchar(255) NOT NULL,
          `company_name` varchar(255) NOT NULL,
          `license_number` varchar(255) NOT NULL,
          `phone` varchar(30) NOT NULL,
          `fax` varchar(30) NOT NULL,
          `mobile` varchar(30) NOT NULL,
          `category` varchar(100) NOT NULL,
          `country` varchar(20) NOT NULL,
          `state` varchar(20) NOT NULL,
          `city` varchar(100) NOT NULL,
          `postcode` varchar(50) NOT NULL,
          PRIMARY KEY (`id`,`email_address`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    The system uses username to log users in. I want to modify it so that it uses email for login, but there's no email_address in the users table. I have added a foreign key on email in the users table (referencing email_address in contact_info). How should I query the database?
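
    A minimal sketch of a login lookup that joins the two tables on the email columns described above (the password check is only a placeholder; how passwords are actually verified is outside the question):

        SELECT u.*
        FROM users u
        JOIN contact_info c ON c.email_address = u.email
        WHERE c.email_address = ?   -- email typed at the login form
          AND u.password = ?        -- placeholder for the real password check
        LIMIT 1;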

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info. Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg) I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')
        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
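
    For what it's worth, a pattern often suggested for concurrent upserts of this shape - offered as a hedged sketch, not as the poster's fix - is to take UPDLOCK/HOLDLOCK on the existence check, so two sessions cannot both pass the check and then block each other between their update and insert phases. The NOLOCK existence check in the original insert is also a plausible contributor, since it can read past in-flight rows:

        begin transaction
        -- the update takes and holds key(-range) locks, so rival upserts
        -- serialize here instead of deadlocking later
        update c with (updlock, holdlock)
        set LastUpdate = getdate(),
            Value = t.Value
        from MarketDataCurrent c
        inner join #MDTUP t on c.MDID = t.mdid;

        insert into MarketDataCurrent (MDID, LastUpdate, Value, Source)
        select t.MDID, getdate(), t.Value, 'sketch'  -- 'sketch' stands in for @source
        from #MDTUP t
        where not exists (select 1
                          from MarketDataCurrent c with (updlock, holdlock)
                          where c.MDID = t.mdid);
        commit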

    Read the article

  • sql stored procedure in visual studio 2008

    - by Greg
    Hi, I want to write a stored procedure in Visual Studio that receives the name of a project as a parameter, runs in database TT, and copies data from the TT.dbo.LCTemp table (where "LC" is the project name received as a parameter) to the TT.dbo.Points table. Both tables have 3 columns: PT_ID, ProjectName and DateCreated. I think I have written it wrong; here it is:

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            declare @ptID varchar(20)
            declare @table varchar(20)
            set @table = 'TT.dbo.' + @project + 'Temp'
            set @ptID = @table + '.PT_ID'
            Insert into TT.dbo.Points
            Select * from [@table]
            where [@ptID] Not in (Select PT_ID from TT.dbo.Points)
        END

    Any idea what I did wrong? Thanks! :) Greg
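
    The likely problem: T-SQL cannot substitute a variable for a table name, so [@table] is parsed as a literal object actually named "@table" rather than the LCTemp table. The standard approach is dynamic SQL. A sketch under that assumption (QUOTENAME guards the concatenated name, but the project parameter should still be validated):

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            DECLARE @sql nvarchar(max);
            SET @sql = N'INSERT INTO TT.dbo.Points
                         SELECT * FROM TT.dbo.' + QUOTENAME(@project + 'Temp') + N'
                         WHERE PT_ID NOT IN (SELECT PT_ID FROM TT.dbo.Points)';
            EXEC sp_executesql @sql;
        END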

    Read the article

  • tablednd post issue help please

    - by netrise
    Hi, please help - I've got a terrible headache and my script is very simple. Why can't I get $_POST['table-2'] after submitting the update button? I want to get the ID numbers in their sorted order.

    index.php:

        <head>
        <script src="jquery.js" type="text/javascript"></script><br />
        <script src="jquery.tablednd.js" type="text/javascript"></script><br />
        <script src="jqueryTableDnDArticle.js" type="text/javascript"></script><br />
        </head>
        <body>
        <form method='POST' action=index.php>
        <table id="table-2" cellspacing="0" cellpadding="2">
        <tr id="a"><td>1</td><td>One</td><td><input type="text" name="one" value="one"/></td></tr>
        <tr id="b"><td>2</td><td>Two</td><td><input type="text" name="two" value="two"/></td></tr>
        <tr id="c"><td>3</td><td>Three</td><td><input type="text" name="three" value="three"/></td></tr>
        <tr id="d"><td>4</td><td>Four</td><td><input type="text" name="four" value="four"/></td></tr>
        <tr id="e"><td>5</td><td>Five</td><td><input type="text" name="five" value="five"/></td></tr>
        </table>
        <input type="submit" name="update" value="Update">
        </form>
        <?php
        $result[] = $_POST['table-2'];
        foreach($result as $value) {
            echo "$value<br/>";
        }
        ?>
        </body>

    jqueryTableDnDArticle.js:

        $("#table-2").tableDnD({
            onDragClass: "myDragClass",
            onDrop: function(table, row) {
                var rows = table.tBodies[0].rows;
                var debugStr = "Row dropped was " + row.id + ". New order: ";
                for (var i = 0; i < rows.length; i++) {
                    debugStr += rows[i].id + " ";
                }
                //$("#debugArea").html(debugStr);
                $.ajax({
                    type: "POST",
                    url: "index.php",
                    data: $.tableDnD.serialize(),
                    success: function(html) {
                        alert("Success");
                    }
                });
            },
            onDragStart: function(table, row) {
                $("#debugArea").html("Started dragging row " + row.id);
            }
        });

    Read the article

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that MySQL has automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement has run. But after the second ALTER TABLE statement has run, I think MySQL should also automatically create another non-unique index question_tag_question_id_question_id on the question_id column for me. As you can see from the SHOW INDEXES output, though, it's not there. Why does MySQL forget about the second ALTER TABLE statement? By the way, I have already created a unique index question_id_tag_id_idx covering both the question_id and tag_id columns - is creating a separate index for each of them redundant?

        mysql> DROP DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> CREATE DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> USE mydatabase;
        Database changed

        mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT,
            ->   PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL,
            ->   UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT,
            ->   UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id),
            ->   PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id
            ->   FOREIGN KEY (tag_id) REFERENCES tag(id);
        Query OK, 0 rows affected (0.10 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id
            ->   FOREIGN KEY (question_id) REFERENCES question(id);
        Query OK, 0 rows affected (0.13 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> SHOW INDEXES FROM question_tag;
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table        | Non_unique | Key_name                   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | question_tag |          0 | PRIMARY                    |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | PRIMARY                    |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          1 | question_tag_tag_id_tag_id |            1 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        5 rows in set (0.01 sec)
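
    A plausible explanation, consistent with InnoDB's documented behavior: InnoDB only auto-creates an index for a foreign key when no usable index already exists, and an existing index is usable when the FK column is its leftmost prefix. question_id is the leftmost column of question_id_tag_id_idx, so the second ALTER TABLE simply reuses that index; tag_id is not the leftmost column of any existing index, which is why the first ALTER TABLE had to create one. A quick sanity check:

        -- question_id lookups can already use the composite index's leftmost prefix:
        EXPLAIN SELECT * FROM question_tag WHERE question_id = 1;
        -- tag_id lookups could not, before question_tag_tag_id_tag_id existed:
        EXPLAIN SELECT * FROM question_tag WHERE tag_id = 1;

    For the same reason, adding a separate single-column index on question_id would indeed be redundant.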

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR: cannot drop table users because other objects depend on it
        DETAIL: constraint artist_users_owner_fk on table artist_users depends on table users
                constraint site_users_owner_fk on table site_users depends on table users
        HINT: Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant it would ignore the dependents, but maybe that only applies to deleting rows and not to dropping tables? I should mention that I can run auto_migrate after I nuke the db once, but after that, errors. Any suggestions or advice much appreciated.

    Read the article

  • Counting characters in an Access database column using SQL

    - by jzr
    I have the following table:

        col1   col2   col3   col4
        ====   ====   ====   ====
        1233   4566   ABCD   CDEF
        1233   4566   ACD1   CDEF
        1233   4566   D1AF   CDEF

    I need to count the characters in col3, so from the data in the previous table it would be:

        char   count
        ====   =====
        A      3
        B      1
        C      2
        D      3
        F      1
        1      2

    Is this possible to achieve using SQL only? At the moment I am thinking of passing a parameter into the SQL query, counting the characters one by one and then summing them, but frankly I wouldn't want to start on the VBA part. This is my query at the moment:

        PARAMETERS X Long;
        SELECT First(Mid(TABLE.col3, X, 1)) AS [col3 Field],
               Count(Mid(TABLE.col3, X, 1)) AS Dcount
        FROM TEST
        GROUP BY Mid(TABLE.col3, X, 1)
        HAVING (((Count(Mid([TABLE].[col3], [X], 1))) >= 1));

    Ideas and help are much appreciated, as I don't usually work with Access and SQL.
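
    A set-based sketch that avoids looping over positions: assuming a helper table Numbers with a single column num holding the integers 1..N (N at least the longest col3 value) - the helper table is an assumption, not part of the original database - each string can be exploded into its characters and grouped:

        SELECT Mid(t.col3, n.num, 1) AS ch, Count(*) AS cnt
        FROM TEST AS t, Numbers AS n
        WHERE n.num <= Len(t.col3)
        GROUP BY Mid(t.col3, n.num, 1);

    The comma join pairs every row with every position, the WHERE clause keeps only valid positions, and so each character of col3 contributes exactly one row to its group's count.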

    Read the article

  • How to create updateable panel using AjaxLink of Wicket?

    - by Arne
    I'm trying to create a link (in fact many links) that updates a table on a website, using a Wicket AjaxLink. But I fail: the table is never updated (I have setOutputMarkupId(true) set and call addComponent, but there must be something else that's wrong). How can I build a panel with a number of links and a table that displays dynamic data depending on the link clicked? Can someone give an example? (Maybe two links: when the first one is clicked the table displays two random numbers from 1-10, and when the second is clicked the table displays random numbers from 1-100.) Without reloading the entire page, only the HTML for the table?

    Read the article

  • Dynamic Like Statement in SQL

    - by Peter McElhinney
    Hey there! I've been racking my brain on how to do this for a while, and I know that some genius on this site will have the answer. Basically I'm trying to do this:

        SELECT column FROM table
        WHERE [table][column] LIKE string1
           OR [table][column] LIKE string2
           OR [table][column] LIKE string3...

    for a list of search strings stored in a column of a table. Obviously I can't write a LIKE clause for each string by hand, because I want the table to be dynamic. Any suggestions would be great. :D EDIT: I'm using MSSQL :(
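
    Because the patterns live in a table, one standard approach (a sketch - the table and column names here are illustrative) is to join against the pattern table rather than enumerating OR terms:

        SELECT DISTINCT t.SomeColumn
        FROM SomeTable t
        JOIN SearchStrings s ON t.SomeColumn LIKE s.Pattern;
        -- if the stored strings are bare text rather than patterns,
        -- wrap them in wildcards in the join condition instead:
        --   ON t.SomeColumn LIKE '%' + s.Pattern + '%'

    The effective OR list then grows and shrinks with the rows of SearchStrings, with no dynamic SQL needed.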

    Read the article

  • InvalidOperationException sequence contains more than one element even when only one element

    - by user310256
    I have three tables: a tblCompany table, a tblParts table and a link table between them, tblLinkCompanyParts. Since tblLinkCompanyParts is a link table, its columns are LinkCompanyPartID (primary key), plus CompanyID from tblCompany and PartID from tblParts as foreign keys. I have tied them up in the dbml file. In code, if I write LinkCompanyParts.Parts (where LinkCompanyParts is an object of the tblLinkCompanyParts type) to get to the corresponding Part object, I get "InvalidOperationException: Sequence contains more than one element". I have looked at the data in the database and there is only one Parts record associated with the LinkCompanyPartID. The stack trace reads like:

        at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source)
        at System.Data.Linq.EntityRef`1.get_Entity()
        at ...

    I read about SingleOrDefault vs FirstOrDefault, but since the link table should have a one-to-one mapping, I think SingleOrDefault should work; besides, the "SingleOrDefault" call is being generated behind the scenes in the designer.cs file, at the following line:

        return this._Part.Entity;

    Any ideas?

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack.

    The table below shows the actual order of events that happen when the detail table view controller is loaded:

        Main Thread                                  Background Thread
        -----------                                  -----------------
        viewDidLoad
                                                     Get JSON data (using AFNetworking)
        Create child NSManagedObjectContext (MOC)
                                                     Parse JSON data
                                                     Insert managed objects in child MOC
                                                     Save child MOC
                                                     Post import completion notification
        Receive import completion notification
        Save parent MOC
        Perform fetch and reload table view
                                                     Delete old managed objects in child MOC
                                                     Save child MOC
                                                     Post deletion completion notification
        Receive deletion completion notification
        Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted, the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.

    Read the article

  • Expanding a varchar column is very slow - why?

    - by francs
    Hi, we need to modify a column of a big product table. Normally DDL statements execute quickly, but the statement below takes about 10 minutes and I would like to know the reason - I just want to expand a varchar column. The details follow.

    Table size:

        wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=> \d log_foot_mark
                   Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)      |
         user_agent  | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    Altering the column:

        wapreader_log=> \timing
        Timing is on.
        wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark
                        ALTER column user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms
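
    For context (hedged, since the post doesn't state the PostgreSQL version): before PostgreSQL 9.2, ALTER COLUMN ... TYPE rewrote the entire table even when only the varchar length limit was being raised, so a 5441 MB table plausibly takes ten minutes; 9.2 and later make a pure length increase a metadata-only change. A widely circulated pre-9.2 workaround was a direct catalog update - shown purely as an at-your-own-risk sketch, since editing system catalogs bypasses normal safety checks:

        -- atttypmod for varchar is the declared length plus a 4-byte header,
        -- so varchar(256) is stored as 260:
        UPDATE pg_attribute
        SET atttypmod = 256 + 4
        WHERE attrelid = 'wapreader_log.log_foot_mark'::regclass
          AND attname = 'user_agent';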

    Read the article

  • How can I combine xsl:attribute and xsl:use-attribute-sets to conditionally use an attribute set?

    - by Peter
    We have an XML node "item" with an attribute "style", which is "Header1". This style can change, however. We have an attribute set named Header1 which defines how this should look in a PDF generated through XSL-FO. This works (the use-attribute-sets is mentioned inline, on the fo:table-cell node):

        <xsl:template match="item[@type='label']">
          <fo:table-row>
            <fo:table-cell xsl:use-attribute-sets="Header1">
              <fo:block>
                <fo:inline font-size="8pt">
                  <xsl:value-of select="." />
                </fo:inline>
              </fo:block>
            </fo:table-cell>
          </fo:table-row>
        </xsl:template>

    But this doesn't (using xsl:attribute, because the attribute @style can also be Header2, for example). It doesn't generate an error - the PDF is created - but the attributes aren't applied:

        <xsl:template match="item[@type='label']">
          <fo:table-row>
            <fo:table-cell>
              <xsl:attribute name="xsl:use-attribute-sets">
                <xsl:value-of select="@style" />
              </xsl:attribute>
              <fo:block>
                <fo:inline font-size="8pt">
                  <xsl:value-of select="." />
                </fo:inline>
              </fo:block>
            </fo:table-cell>
          </fo:table-row>
        </xsl:template>

    Does anyone know why? And how we could achieve this, preferably without long xsl:if or xsl:when stuff?

    Read the article

  • MySQL Count If using 4 tables or Perl

    - by user1726133
    Hi, I have a relatively convoluted query that relies on 4 different tables. Unfortunately I do not have control of this data, but I do have to query it. I ran this simpler query, using just table 1 and table 2, and it works:

        SELECT actor, receiver,
               count(IF(t2.group1 = "anxiety behavior", 1, 0)) AS 'anxiety'
        FROM ethogram_edited_obs_behaviors t1
        JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
        GROUP BY actor;

    Below are the 4 tables I need:

        Table 1            Table 2                       Table 3         Table 4
        Actor | Behavior   Behavior | type of behavior   subject | sex   subject | subject_code
        er    | frown      frown    | anxiety behavior   Eric    | M     Eric    | er

    and here is the query I tried that didn't work:

        SELECT actor,
               count(IF(t2.group1 = "anxiety behavior", 1, 0)
                     AND (t3.sex = "M", 1, 0)) AS 'anxiety',
        FROM ethogram_edited_obs_behaviors t1
        JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
        JOIN subject_code t3 ON t1.actor = t3.behavior_code1
        JOIN subjects t4 ON t3.subject = t4.yerkes_code
        GROUP BY actor;

    Any help would be much appreciated!! Thanks :) P.S. If this is easier to do in Perl, tips are also much appreciated.
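
    A hedged sketch of how the two tests are usually combined: in MySQL a boolean expression evaluates to 0 or 1, so SUM over the combined condition counts only the matching rows (COUNT(IF(..., 1, 0)) counts the zeros too, since COUNT counts any non-NULL value). The join columns below are taken from the question and may need adjusting to the real schema:

        SELECT t1.actor,
               SUM(t2.group1 = 'anxiety behavior' AND t3.sex = 'M') AS anxiety
        FROM ethogram_edited_obs_behaviors t1
        JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
        JOIN subject_code t3 ON t1.actor = t3.subject_code
        JOIN subjects t4 ON t3.subject = t4.yerkes_code
        GROUP BY t1.actor;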

    Read the article

  • Add a background image to tables in a navigation view in a tabbed application

    - by padatronic
    Hi people, I have a tabbed application with a navigation controller in each tab. In each navigation controller I have a table view controller, which obviously contains a table, and I push a new table view controller from it. I want to put a background behind the table, but I can only manage to add an image in front of the table. I think this may be because I am adding the subview in the table view controller, but I don't seem to be able to add it anywhere else. Please could you help me? Thanks.

    Read the article

  • SQL Server Delete - Foreign Key

    - by Ahmet Altun
    I have got the following tables in SQL Server 2005:

    USER table: information about users and so on.
    COUNTRY table: holds a list of all the countries in the world.
    USER_COUNTRY table: records which user has visited which country; it holds UserID and CountryID.

    For example, the USER_COUNTRY table looks like this:

        ID -- UserID -- CountryID
        1  -- 1      -- 34
        2  -- 1      -- 5
        3  -- 2      -- 17
        4  -- 2      -- 12
        5  -- 2      -- 21
        6  -- 3      -- 19

    My question is: when a user is deleted in the USER table, how can I make the associated records in the USER_COUNTRY table be deleted automatically? Maybe by using a foreign key constraint?
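
    Yes - a cascading foreign key does exactly this. A minimal sketch (the constraint name is illustrative):

        ALTER TABLE USER_COUNTRY
        ADD CONSTRAINT FK_UserCountry_User
            FOREIGN KEY (UserID) REFERENCES [USER](ID)
            ON DELETE CASCADE;

    With ON DELETE CASCADE in place, deleting a row from USER automatically removes that user's USER_COUNTRY rows; a second constraint on CountryID could do the same for deleted countries.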

    Read the article

  • Nested Execution Flow Control

    - by chris
    I've read tens of answers related to callbacks, promises and other ways to control flow, but I still can't wrap my head around this task, obviously due to my lack of competence. I have a nested problem: in test_1() (and the other functions) I would like to ensure that the rows are added to the table in the order in which the elements appear in the object, and I would like to execute either test_2 or test_3 (or both, one after the other) only after test_1 has finished completely. The right sequence will only be known at runtime (there will be a switch with the possible sequences, like 1,2,3 or 1,3,2 or 1,2,1,3 or 1,3,3,2, etc...). Code:

        $(function () {
            // create table
            tbl = document.createElement('table');
            tbl.className = "mainTbl";
            $("body").append(tbl);
        });

        function test_1() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

        function test_2() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

        function test_3() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

    I know that calling the functions in sequence doesn't work, as they don't wait for each other... I think promises are the way to go, but I can't find the right combination, and the documentation is way too complex for my skills. What's the best way to structure the code so that it's executed in the right order?

    Read the article

  • Nhibernate Complex Type binding

    - by user329983
    I have two Oracle user-defined types:

    Audit_Type - a normal object with two fields, a string and a number
    Audit_Table_Type - a table of Audit_Types (an array)

    I have a stored procedure that takes an Audit_Table_Type as a parameter. This is what I did intuitively - I created the list and just set it in, but it gives me nothing close to what I wanted; I couldn't figure out how to bind to a table at all:

        List<Audit_Type> table = new List<Audit_Type>();
        var query = session.CreateSQLQuery("call Audit_Rows(Audit_Table_Type(:table))")
            .SetParameterList("table", table, NHibernateUtil.Custom(typeof(AuditTypeUDT)))

    I have built the inline SQL that would do this for me, but it would destroy my shared pool (not using binds). So, a general question: how do I bind to complex/composite types using NHibernate?

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table which keeps track of user rating totals for the items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. This MySQL table will have the following fields:

        rating_id    (always - unique)
        item_id      (always - not unique)
        user_id      (optional - not unique)
        ip_address   (always - not unique)
        rating_value (always - not unique)
        has_review   (bool)

    I envision 90% of the queries going something like this:

    When a user rates something: select where item_id = x and ip_address = y; if rows = 0, insert the rating.
    In the user account pages: select where ip_address = x or username = y.

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
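
    Non-unique columns can absolutely be indexed, and a table can carry several indexes at once. A sketch of plausible choices for the two query shapes above (the table name "ratings" and the index names are illustrative):

        -- serves "has this IP already rated this item?" and, being UNIQUE,
        -- also enforces one rating per item per IP at the database level:
        CREATE UNIQUE INDEX idx_item_ip ON ratings (item_id, ip_address);

        -- serve the account-page lookups, which filter on one column each:
        CREATE INDEX idx_ip   ON ratings (ip_address);
        CREATE INDEX idx_user ON ratings (user_id);

    The composite index also covers queries filtering on item_id alone (leftmost prefix), so a separate single-column item_id index would be redundant.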

    Read the article

  • concatenate multi values in one record without duplication

    - by mikehjun
    I have a dbf table like the one below, which is the result of a one-to-many join of two tables. I want unique zone values for each taxlot id.

    Input table:

        tid ----- zone
        1  ------ A
        1  ------ A
        1  ------ B
        1  ------ C
        2  ------ D
        2  ------ E
        3  ------ C

    Desired output table:

        tid ----- zone
        1  ------ A, B, C
        2  ------ D, E
        3  ------ C

    I got some help but couldn't make it work:

        inputTbl = r"C:\temp\input.dbf"
        taxIdZoningDict = {}
        searchRows = gp.searchcursor(inputTbl)
        searchRow = searchRows.next()
        while searchRow:
            if searchRow.TID in taxIdZoningDict:
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            else:
                taxIdZoningDict[searchRow.TID] = set()  # a set prevents duplicates!
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            searchRow = searchRows.next()

        outputTbl = r"C:\temp\output.dbf"
        gp.CreateTable_management(r"C:\temp", "output.dbf")
        gp.AddField_management(outputTbl, "TID", "LONG")
        gp.AddField_management(outputTbl, "ZONES", "TEXT", "", "", "20")

        tidList = taxIdZoningDict.keys()
        tidList.sort()  # sorts in ascending order
        insertRows = gp.insertcursor(outputTbl)
        for tid in tidList:
            concatString = ""
            for zone in taxIdZoningDict[tid]:
                concatString = concatString + zone + ","
            insertRow = insertRows.newrow()
            insertRow.TID = tid
            insertRow.ZONES = concatString[:-1]
            insertRows.insertrow(insertRow)
        del insertRow
        del insertRows

    Read the article

  • How to select a specific cell in a JTable?

    - by Nitz
    Hey guys. In one of my JFrames I have used two JTables with different data. What I want is: as soon as the user selects a cell in the first table, the cell next to that position should be selected in the second table. That is, if the user has selected the cell at column 3, row 1 in the first table, then the second table should automatically select the cell at column 4, row 1. I know how to get the selected row and column of a JTable, but I don't know how to set the selection on a JTable.

    Read the article
