Search Results

Search found 25547 results on 1022 pages for 'table locking'.


  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something really simple that, for some reason, I have been unable to accomplish to this point. I have also, surprisingly, been unable to Google the answer. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation would be as follows if I had a table like:

        Table EXAMPLE
        ID         - INTEGER
        FIRST_NAME - VARCHAR(45)
        LAST_NAME  - VARCHAR(45)
        PHONE      - VARCHAR(45)
        CITY       - VARCHAR(45)
        STATE      - VARCHAR(45)
        ZIP        - VARCHAR(45)

    This would be the setup for the table, where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing is, I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to increment automatically, so that I do not have to include it in the INSERT statement. So instead of

        INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH','333-333-3333','NORTH POLE', ...)

    I can leave out the ID column and just write something like

        INSERT INTO EXAMPLE VALUES ('JOHN','SMITH', ...)

    Notice I wouldn't have to define the ID column. I know this is a very common task, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot.
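
    A minimal sketch of the usual MySQL pattern, with the table trimmed down for brevity. One detail worth noting: an INSERT with no column list must supply a value for every column, so it is the explicit column list that lets you skip the key, which MySQL then fills in itself.

        CREATE TABLE example (
          id         INT NOT NULL AUTO_INCREMENT,  -- surrogate key, assigned by MySQL
          first_name VARCHAR(45),
          last_name  VARCHAR(45),
          PRIMARY KEY (id)
        );

        -- list only the columns you are supplying; id is omitted and auto-generated
        INSERT INTO example (first_name, last_name) VALUES ('JOHN', 'SMITH');

        -- LAST_INSERT_ID() returns the key MySQL just assigned, if you need it
        SELECT LAST_INSERT_ID();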


  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out, performance is the same both ways, but the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
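
    A hedged suggestion rather than a confirmed diagnosis: with USE INDEX (published), MySQL walks the published index and discards rows that fail the feed_id filter, and scanning in ascending order apparently has to wade through far more non-matching rows before collecting 50. A composite index covering both the filter and the sort column lets each feed's rows be read in published order directly:

        ALTER TABLE posts ADD INDEX feed_published (feed_id, published);

        -- then drop the USE INDEX hint and let the optimizer choose
        SELECT posts.id FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    With an IN list over several feeds a filesort may still appear, but it sorts only the small per-feed range-scan output rather than walking the whole table.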


  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these:

        video (~239371 rows)
        VID(int), UID(int), title(varchar), status(enum), type(varchar),
        is_duplicate(enum), is_adult(enum), channel_id(tinyint)

        signup (~115440 rows)
        UID(int), username(varchar)

        videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly)
        videos_id(int), views_date(date), num_of_views(int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result. So, how can I optimize this?
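
    One hedged approach, sketched below: aggregate the views first in a derived table restricted by date, then join only the aggregated rows to video and signup. With a composite index on videos_views the aggregation becomes an index-only range scan, so it stays cheap as the table grows.

        ALTER TABLE videos_views
          ADD INDEX idx_date_video (views_date, videos_id, num_of_views);

        SELECT v.VID, v.title, v.UID, s.username, t.tmp_num
        FROM (
            SELECT videos_id, SUM(num_of_views) AS tmp_num
            FROM videos_views
            WHERE views_date >= '2001-05-11'
            GROUP BY videos_id
        ) AS t
        JOIN video v ON v.VID = t.videos_id
        LEFT JOIN signup s ON s.UID = v.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
        ORDER BY t.tmp_num DESC
        LIMIT 8;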


  • Full Text Index type column is empty

    - by RemotecUk
    I am trying to create an index on a VARBINARY(MAX) field in my SQL Server 2008 database. The steps I am taking are as follows:

    - Table: dbo.Records.
    - Right click on the table, select "Full Text Index", then select "Define Index...".
    - I choose the primary key, which is the PK of my table (field name Id, type UNIQUEIDENTIFIER).
    - I then get the screen with the options Available Columns, Language for Word Breaker, and Type Column.
    - I select my VARBINARY(MAX) field called Chart as the Available Column by ticking the box.
    - I select "English" as the Language for Word Breaker field.
    - Then... I try to select the Type Column, but there are no entries in it, and I cannot proceed by clicking "Next" until it is populated.

    Why are there no entries in this column for selection, and what should be in there?

    Note 1: The VARBINARY(MAX) field is linked to a file group, if that makes any difference.
    Note 2: I also noticed that in the table designer I cannot set the full text option on that same field to "Yes" - it's permanently stuck on "No".

    Thanks.
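
    For a varbinary column, the Type Column must be a character column in the same table that tells the word breaker what document format each blob holds ('.doc', '.pdf', etc.); the dropdown is presumably empty because dbo.Records has no char/varchar column to offer. A hedged T-SQL sketch, with the new column name assumed and a full-text catalog presumed to exist:

        -- a column recording each blob's document type
        ALTER TABLE dbo.Records ADD ChartType varchar(8) NULL;

        -- the T-SQL equivalent of the wizard, assuming the PK's unique index is PK_Records
        CREATE FULLTEXT INDEX ON dbo.Records
            (Chart TYPE COLUMN ChartType LANGUAGE 1033)
            KEY INDEX PK_Records;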


  • Copy whole SQL Server database into JSON from Python

    - by Oli
    I'm facing an atypical conversion problem. About a decade ago I coded up a large site in ASP. Over the years this turned into ASP.NET but kept the same database. I've just re-done the site in Django and I've copied all the core data, but before I cancel my account with the host, I need to make sure I've got a long-term backup of the data, so if it turns out I'm missing something, I can copy it from a local copy. To complicate matters, I no longer have Windows; I moved to Ubuntu on all my machines some time back. I could ask the host to send me a backup, but having no access to a machine with MSSQL, I wouldn't be able to use that if I needed to. So I'm looking for something that does:

        db = {}
        for table in database:
            db[table.name] = [row for row in table]

    And then I could serialize db off somewhere for later consumption... But how do I do the table iteration? Is there an easier way to do all of this? Can MSSQL do a cross-platform SQL dump (including data)? For previous MSSQL work I've used pymssql, but I don't know how to iterate the tables and copy rows (ideally with column headers so I can tell what the data is). I'm not looking for much code, but I need a poke in the right direction.
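
    A sketch of the poke in the right direction: INFORMATION_SCHEMA.TABLES gives you the table list, and the DB-API cursor's description attribute gives the column headers. Connection details are placeholders, and keyword argument names vary a little between pymssql versions.

        import json
        import pymssql  # DB-API 2.0 driver; runs fine on Linux

        conn = pymssql.connect(server='host', user='user',
                               password='pw', database='mydb')
        cur = conn.cursor()

        # enumerate the user tables instead of hard-coding names
        cur.execute("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                    "WHERE TABLE_TYPE = 'BASE TABLE'")
        tables = [r[0] for r in cur.fetchall()]

        db = {}
        for table in tables:
            cur.execute("SELECT * FROM [%s]" % table)
            cols = [d[0] for d in cur.description]  # column headers
            db[table] = [dict(zip(cols, row)) for row in cur.fetchall()]

        # default=str papers over dates/decimals that json can't serialize natively
        with open('backup.json', 'w') as f:
            json.dump(db, f, default=str)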


  • Optimizing MySql query to avoid using "Using filesort"

    - by usef_ksa
    I need your help to optimize this query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is:

        SELECT title FROM tag, article
        WHERE tag='Riyad' AND tag.article_id=article.id
        ORDER BY tag.article_id

    The table structures are the following:

    Tag table:

        CREATE TABLE `tag` (
          `tag` VARCHAR( 30 ) NOT NULL ,
          `article_id` INT NOT NULL ,
          INDEX ( `tag` )
        ) ENGINE = MYISAM ;

    Article table:

        CREATE TABLE `article` (
          `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
          `title` VARCHAR( 60 ) NOT NULL
        ) ENGINE = MYISAM

    Sample data:

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');
        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
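
    A hedged sketch of the usual fix: with an index on (tag) alone, the rows matching 'Riyad' come back in no particular article_id order, hence the filesort. A composite index on (tag, article_id) delivers them pre-sorted, so the ORDER BY can be satisfied by the index itself.

        -- assuming MySQL auto-named the existing single-column index `tag`
        ALTER TABLE tag DROP INDEX tag;
        ALTER TABLE tag ADD INDEX tag_article (tag, article_id);

        -- EXPLAIN should now show the index covering both the WHERE and the ORDER BY
        EXPLAIN SELECT title
        FROM tag JOIN article ON tag.article_id = article.id
        WHERE tag = 'Riyad'
        ORDER BY tag.article_id;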


  • NHibernate update using composite key

    - by Mahesh
    Hi, I have a table definition as given below:

        License
        ClientId
        Type
        Total
        Used

    ClientId and Type together uniquely identify a row. I have a mapping file as given below:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
          <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
            <id name="ClientId" access="field" column="ClientID" />
            <property name="Total" access="field" column="Total"/>
            <property name="Used" access="field" column="Used"/>
            <property name="Type" access="field" column="Type"/>
          </class>
        </hibernate-mapping>

    If a client uses a license to create a user, I need to update the Used column in the table. Because I set ClientId as the id column for this table, I am getting TooManyRowsAffectedException. Could you please let me know how to set a composite key at the mapping level, so that NHibernate can update based on ClientId and Type? Something like:

        UPDATE License SET Used=Used-1 WHERE ClientId='xxx' AND Type=1

    Please help. Thanks, Mahesh
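
    A hedged sketch of a composite-id mapping for this class; the <id> element is replaced by <composite-id> with one <key-property> per key column, so NHibernate's generated UPDATE keys on both:

        <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
          <composite-id>
            <key-property name="ClientId" access="field" column="ClientID"/>
            <key-property name="Type" access="field" column="Type"/>
          </composite-id>
          <property name="Total" access="field" column="Total"/>
          <property name="Used" access="field" column="Used"/>
        </class>

    One caveat: classes mapped with a composite id need Equals and GetHashCode overridden over the key fields for NHibernate's identity map to behave.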


  • Mysql - Help me alter this search query to get desired results

    - by sandeepan-nath
    Following is a dump of the tables and data needed to understand the system. The system consists of tutors and classes. The table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes.

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `All_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          `id_wc` int(10) unsigned default NULL,
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, NULL), (2, 1, NULL), (3, 1, 1), (4, 1, 1),
        (6, 2, NULL), (7, 2, NULL), (5, 2, 2), (4, 2, 2);

    Following is my query. It searches for "first class" (tag id 3 for first and 4 for class, in the Tags table) and returns all those classes such that both the terms first and class are present in the class name:

        SELECT wtagrels.id_wc,
               SUM(DISTINCT( wtagrels.id_tag =3)) AS key_1_total_matches,
               SUM(DISTINCT( wtagrels.id_tag =4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE ( wtagrels.id_tag =3 OR wtagrels.id_tag =4 )
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    It returns the class with id_wc = 1. But I want the search to show all those classes such that all the search terms are present in the class name or its tutor name, so that searching "Sandeepan class" (wtagrels.id_tag = 1,4) or "Sandeepan Nath" also returns the class with id_wc = 1, while searching "Bob First" should not return any classes. Please modify the above query or suggest a new one - if possible using MyISAM full-text search - but somehow help me get the result.
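
    One way to express this against the schema as given, sketched and not the only option: treat tutor-level rows (id_wc IS NULL) as applying to every class of that tutor, via a self-join, and use MAX over boolean matches so duplicate join rows cannot skew the counts. The tag ids 1 and 4 below stand in for "Sandeepan class".

        SELECT c.id_wc,
               MAX(t.id_tag = 1) AS key_1_matched,   -- 'Sandeepan'
               MAX(t.id_tag = 4) AS key_2_matched    -- 'class'
        FROM All_Tag_Relations AS c
        JOIN All_Tag_Relations AS t
          ON t.id_tutor = c.id_tutor
         AND (t.id_wc = c.id_wc OR t.id_wc IS NULL)  -- class tags plus the tutor's own tags
        WHERE c.id_wc IS NOT NULL
          AND t.id_tag IN (1, 4)
        GROUP BY c.id_wc
        HAVING key_1_matched = 1 AND key_2_matched = 1
        LIMIT 0, 20;

    With the sample data this returns id_wc = 1 for "Sandeepan class" and nothing for "Bob First", since tag 6 (Bob) and tag 3 (first) never meet on the same class/tutor pair.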


  • sybase - fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

        -- Sybase optimizes and uses multi-column index... fast!
        SELECT ID,status,dateTime FROM myTable
        WHERE status in ('NEW','SENT')
        ORDER BY ID

        -- Sybase does not use index and does very slow table scan
        SELECT ID,status,dateTime FROM myTable
        WHERE status in (select status from allowableStatusValues)
        ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when choosing from another table. Someone please give me a clue, and thank you in advance.
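
    A hedged thing to try: rewrite the IN-subquery as a join, which older ASE optimizers tend to cost far better than a correlated IN against another table.

        -- sketch: the join form often lets the optimizer drive myTable
        -- through the multi-column index on (status, ...)
        SELECT m.ID, m.status, m.dateTime
        FROM myTable m
        JOIN allowableStatusValues a ON m.status = a.status
        ORDER BY m.ID

    One caveat: if allowableStatusValues can hold duplicate status rows, the join returns duplicates where IN would not, so de-duplicate that table (or select DISTINCT from it) first.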


  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        ab: => set transaction isolation level serializable;
        a:  => select count(1) from table where id=1; --> 0
        b:  => select count(1) from table where id=1; --> 0
        a:  => insert into table values(1); --> 1
        b:  => insert into table values(1);
            --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
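
    The standard workaround is a retry loop in a function, along the lines of the merge_db example in the PostgreSQL documentation. A sketch with assumed names kv(id, value): try the UPDATE, fall back to INSERT, and treat a concurrent unique_violation as a signal to loop and update instead.

        CREATE OR REPLACE FUNCTION upsert_kv(k integer, v text) RETURNS void AS $$
        BEGIN
            LOOP
                -- try the update first
                UPDATE kv SET value = v WHERE id = k;
                IF found THEN
                    RETURN;
                END IF;
                -- not there: try to insert; if someone else inserted the same
                -- key concurrently, catch the violation and retry the UPDATE
                BEGIN
                    INSERT INTO kv(id, value) VALUES (k, v);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- fall through and loop
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;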


  • GridView: How can I get rid of extra space from my GridView object?

    - by Lajos Arpad
    Hello, I'm writing an application for Android phones for Human vs. Human chess play over the internet. I was looking at some tutorials to learn how to develop Android applications and found a very nice example of making galleries (it was a GridView usage example for making a gallery about dogs), and the idea came to draw the chess table using a GridView, because the example project also handled the point & click event, and I intended to use the same event in the same way but for a different purpose. The game works well (currently it's a hotseat version). However, I'm really frustrated by the fact that whenever I rotate the screen of the phone, my GridView gets hysterical and puts empty space in my chess table between the columns. I realized that the cause of this is that the GridView's width is the same as its parent's and the GridView tries to fill its parent in width, but there should be (and probably is) a simple solution to this problem. However, after a full day of researching, I haven't found any clue to help me make a perfect drawing of my chess table without a negative side effect on functionality. The chess table looks fine if the phone is in Portrait mode, but in Landscape mode it's far from nice.

    This is how I decide whether we are in Portrait or Landscape mode:

        ((((MainActivity)mContext).getWindow().getWindowManager().getDefaultDisplay().getWidth()) <
         ((MainActivity)mContext).getWindow().getWindowManager().getDefaultDisplay().getHeight())

    In the main.xml file the GridView is defined in the following way:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <GridView xmlns:android="http://schemas.android.com/apk/res/android"
                android:id="@+id/gridview"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:numColumns="8"
                android:verticalSpacing="0dp"
                android:horizontalSpacing="0dp"
                android:stretchMode="columnWidth"
                android:gravity="center" >
            </GridView>
            ...
        </LinearLayout>

    I appreciate any help with the problem, and thank you for reading this.
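
    A hedged sketch of one thing to try: stretchMode="columnWidth" is precisely what distributes leftover parent width between the columns, so switching it off and pinning a fixed columnWidth keeps the cells square in both orientations. The 40dp cell size below is an assumed value; you may want to compute it from the screen dimensions at runtime instead.

        <!-- stop the GridView from stretching columns to fill the parent -->
        <GridView
            android:id="@+id/gridview"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center"
            android:numColumns="8"
            android:columnWidth="40dp"
            android:stretchMode="none"
            android:verticalSpacing="0dp"
            android:horizontalSpacing="0dp" />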


  • I am using relational division with EAV, but I need to find results in EAV that have only some of the criteria

    - by NewToDB
    I have two tables:

        CREATE TABLE EAV (
          subscriber_id INT(1) NOT NULL DEFAULT '0',
          attribute_id CHAR(62) NOT NULL DEFAULT '',
          attribute_value CHAR(62) NOT NULL DEFAULT '',
          PRIMARY KEY (subscriber_id,attribute_id)
        )

        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'color','red')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'size','xl')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'garment','shirt')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'color','red')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'size','xl')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'garment','pants')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (3,'garment','pants')

        CREATE TABLE CRITERIA (
          attribute_id CHAR(62) NOT NULL DEFAULT '',
          attribute_value CHAR(62) NOT NULL DEFAULT ''
        )

        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('color', 'red')
        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('size', 'xl')

    To find all subscribers in the EAV that match my criteria, I use relational division:

        SELECT DISTINCT(subscriber_id) FROM EAV
        WHERE subscriber_id IN
          (SELECT E.subscriber_id
           FROM EAV AS E
           JOIN CRITERIA AS CR
             ON E.attribute_id = CR.attribute_id
            AND E.attribute_value = CR.attribute_value
           GROUP BY E.subscriber_id
           HAVING COUNT(*) = (SELECT COUNT(*) FROM CRITERIA))

    This gives me a unique list of subscribers who have all the criteria, so I get back subscribers 1 and 2: they want the color red in size xl, and that's exactly my criteria. But what if I also want subscriber 3, who didn't specifically say what color or size they want (i.e., there is no entry for attribute 'color' or 'size' in the EAV table for subscriber 3)? Given my current design, is there a way I can extend my query to include subscribers that have zero or more of the attributes defined - and if they do have an attribute defined, it must match the criteria? Or is there a better way to design the table to aid in querying?
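
    One hedged way to express "no defined attribute contradicts the criteria", sketched below: left-join each subscriber's rows restricted to the criteria attributes and reject the subscriber only if some joined row mismatches. A subscriber with no rows for those attributes joins nothing and so survives.

        SELECT s.subscriber_id
        FROM (SELECT DISTINCT subscriber_id FROM EAV) AS s
        LEFT JOIN EAV AS e
               ON e.subscriber_id = s.subscriber_id
              AND e.attribute_id IN (SELECT attribute_id FROM CRITERIA)
        LEFT JOIN CRITERIA AS c
               ON c.attribute_id = e.attribute_id
        GROUP BY s.subscriber_id
        HAVING SUM(CASE WHEN e.attribute_value <> c.attribute_value
                        THEN 1 ELSE 0 END) = 0;

    With the sample data this returns subscribers 1, 2, and 3; a hypothetical subscriber with color = 'blue' would be excluded by the mismatch count.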


  • MySql Query lag time / deadlock?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once, which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
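
    The duplicates are more plausibly a race than lag: two scripts can both pass the getCurrentItem() check before either calls setCurrentItem(). One hedged way out, assuming you can add a worker_id column and a 'processing' status, is to claim an item in a single atomic UPDATE and only process what the claim captured:

        -- each worker claims one pending item atomically; the row lock taken
        -- by the UPDATE means no two workers can claim the same row
        UPDATE items
           SET status = 'processing', worker_id = 42   -- 42 = this worker's id/pid
         WHERE status = 'pending'
         ORDER BY id
         LIMIT 1;

        -- the worker then processes exactly what it claimed
        SELECT * FROM items WHERE status = 'processing' AND worker_id = 42;

    A text file would not help; it has the same check-then-set race, just with clumsier locking.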


  • t:commandSortHeader not being styled

    - by JBristow
    I've got the following JSF:

        <t:column styleClass="cb-status-column">
            <f:facet name="header">
                <t:commandSortHeader columnName="externalStatus" styleClass="cb-status-column">
                    Status
                </t:commandSortHeader>
            </f:facet>
            <h:outputText value="#{tx.externalStatus}" title="#{tx.externalStatus}"/>
        </t:column>

    Unfortunately, it renders like this:

        <th>
            <a href="#" onclick="return oamSubmitForm('transactionsTableForm','cb-activity-table:j_id131');" id="cb-activity-table:j_id131">
                Status
            </a>
        </th>

    How do I get it to look like this?

        <th class="cb-status-column">
            <a href="#" onclick="return oamSubmitForm('transactionsTableForm','cb-activity-table:j_id131');" id="cb-activity-table:j_id131">
                Status
            </a>
        </th>
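
    A hedged guess at the cause: styleClass on t:commandSortHeader styles the rendered anchor, not the surrounding th; the th is rendered by the column. If the Tomahawk version in use supports a header style attribute on t:column (headerstyleClass in the Tomahawk taglib - treat the attribute name as an assumption to verify against your version), something like this should work:

        <t:column styleClass="cb-status-column" headerstyleClass="cb-status-column">
            <f:facet name="header">
                <t:commandSortHeader columnName="externalStatus">
                    Status
                </t:commandSortHeader>
            </f:facet>
            <h:outputText value="#{tx.externalStatus}" title="#{tx.externalStatus}"/>
        </t:column>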


  • nested insert exec work around

    - by stackoverflowuser
    I have 2 stored procedures, usp_SP1 and usp_SP2. Both of them make use of "insert into #tt exec sp_somesp". I wanted to create a 3rd stored procedure which decides which stored proc to call, something like:

        create proc usp_Decision
        (
            @value int
        )
        as
        begin
            if (@value = 1)
                exec usp_SP1 -- this proc already has insert into #tt exec usp_somestoredproc
            else
                exec usp_SP2 -- this proc too has insert into #tt exec usp_somestoredproc
        end

    Later I realized I needed some structure defined for the return value from usp_Decision so that I can populate the SSRS dataset field. So here is what I tried:

    - Within usp_Decision, I created a temp table and tried "insert into #tt exec usp_SP1". It didn't work: "insert exec cannot be nested".
    - Within usp_Decision, I tried passing a table variable to each of the stored procs, updating the table within the procs, and then doing "select * from" it. That didn't work either: a table variable passed as a parameter cannot be modified within the stored proc.

    Please suggest what can be done.
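
    One hedged workaround, assuming the inner procedures can be modified: a temp table created in usp_Decision is visible inside the procedures it calls, so let usp_SP1/usp_SP2 insert into the caller's table directly instead of doing their own INSERT...EXEC. That removes the nesting entirely and gives SSRS a stable result shape. Column names below are placeholders for the procs' actual output.

        create proc usp_Decision
        (
            @value int
        )
        as
        begin
            -- must match the shape both inner procs produce
            create table #result (col1 int, col2 varchar(50));

            if (@value = 1)
                exec usp_SP1   -- rewritten to insert into #result directly
            else
                exec usp_SP2   -- likewise

            select * from #result;  -- fixed structure for the SSRS dataset
        end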


  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to the benefit of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little murkier to me because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
          col1 int,
          col2 int,
          col3 int,
          col4 int
        PRIMARY KEY (col1,col2)

        INDEX IX_1
          col3
          INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1=12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query I gave it offers no obvious efficient way for the engine to find the index rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way I can estimate the impact of removing one or more of these indices without having to actually try it?
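
    A hedged way to estimate an index's worth before dropping it: the usage-stats DMV counts how often each index is read versus written since the last service restart. An index whose user_updates dwarfs its seeks/scans/lookups is mostly delete/update overhead.

        -- reads (seeks/scans/lookups) vs. writes (updates) per index on Foo
        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups,  -- how often it helps
               s.user_updates                               -- how often it costs
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.object_id = OBJECT_ID('dbo.Foo')
          AND s.database_id = DB_ID();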


  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all other tables in the database. That means the same XML field contains various XML structures. For example, my table has two records with XML data like this:

    1st record:

        <client>
          <name>xyz</name>
          <ssn>432-54-4231</ssn>
        </client>

    2nd record:

        <emp>
          <name>abc</name>
          <sal>5000</sal>
        </emp>

    These are just two sample formats and two records; the table actually has many more XML formats in the same field and many records in each format. Now my problem is that, upon query, I need these XML formats converted into tabular result sets. What are my options? Querying this table and generating reports from it will be a regular task. I want to create a stored procedure to which I can pass whether I need to query "<emp>" or "<client>", and which then returns tabular data.
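
    The xml type's nodes() and value() methods do exactly this shredding. A sketch with assumed names (an AuditTrail table with an AuditData xml column); since each format has its own column list, the stored procedure would branch on a parameter and run the matching variant:

        -- shred every <emp> fragment into rows and typed columns
        SELECT x.n.value('(name/text())[1]', 'varchar(100)') AS name,
               x.n.value('(sal/text())[1]',  'int')          AS sal
        FROM dbo.AuditTrail AS a
        CROSS APPLY a.AuditData.nodes('/emp') AS x(n);

        -- the <client> variant differs only in the path and columns
        SELECT x.n.value('(name/text())[1]', 'varchar(100)') AS name,
               x.n.value('(ssn/text())[1]',  'varchar(11)')  AS ssn
        FROM dbo.AuditTrail AS a
        CROSS APPLY a.AuditData.nodes('/client') AS x(n);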


  • How can I extend this SQL query to find the k nearest neighbors?

    - by Smigs
    I have a database full of two-dimensional data - points on a map. Each record has a field of the geometry type. What I need to be able to do is pass a point to a stored procedure which returns the k nearest points (k would also be passed to the sproc, but that's easy). I've found a query at http://blogs.msdn.com/isaac/archive/2008/10/23/nearest-neighbors.aspx which gets the single nearest neighbour, but I can't figure out how to extend it to find the k nearest neighbours.

    This is the current query - T is the table, g is the geometry field, @x is the point to search around, and Numbers is a table with integers 1 to n:

        DECLARE @start FLOAT = 1000;

        WITH NearestPoints AS
        (
            SELECT TOP(1) WITH TIES *, T.g.STDistance(@x) AS dist
            FROM Numbers JOIN T WITH(INDEX(spatial_index))
            ON T.g.STDistance(@x) < @start*POWER(2,Numbers.n)
            ORDER BY n
        )
        SELECT TOP(1) * FROM NearestPoints
        ORDER BY n, dist

    The inner query selects the nearest non-empty region and the outer query then selects the top result from that region; the outer query can easily be changed to (e.g.) SELECT TOP(20), but if the nearest region only contains one result, you're stuck with that. I figure I probably need to recursively search for the first region containing k records, but without using a table variable (which would cause maintenance problems, as you have to create the table structure and it's liable to change - there are lots of fields), I can't see how.
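
    A pragmatic sketch rather than the blog's exact method: bound the search with a radius generous enough to contain k points, let the spatial index prune, then rank by distance. @maxRadius is an assumption you must size for your data's density; if fewer than @k rows come back, double it and rerun - that retry loop is effectively the region growth the Numbers-table trick simulates in one statement.

        DECLARE @k INT = 20;
        DECLARE @maxRadius FLOAT = 100000;  -- assumed; tune to your data

        SELECT TOP(@k) *, T.g.STDistance(@x) AS dist
        FROM T WITH(INDEX(spatial_index))
        WHERE T.g.STDistance(@x) < @maxRadius   -- lets the spatial index prune
        ORDER BY dist;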


  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g., where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to parse with any other program. Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those SQL statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
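
    A hedged sketch of the usual trick: pipe mysqlbinlog straight through grep so the full dump never lands on disk, and only the matching statements are written out. File names and the pattern are placeholders for your actual binlogs and table.

        # stream the decode through grep; nothing large is ever materialized
        mysqlbinlog --short-form binlog.000001 binlog.000002 \
          | grep "INSERT INTO table SET field1=" \
          >> wanted_inserts.sql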


  • Applying the Hibernate filter attribute to a Bag with a many-to-many relationship

    - by David P
    Consider the following Hibernate mapping file:

        <hibernate-mapping ...>
          <class name="ContentPackage" table="contentPackages">
            <id name="Id" column="id" type="int"><generator class="native" /></id>
            ...
            <bag name="Clips" table="contentAudVidLinks">
              <key column="fk_contentPackageId"></key>
              <many-to-many class="Clip" column="fk_AudVidId"></many-to-many>
              <filter name="effectiveDate" condition=":asOfDate BETWEEN startDate and endDate" />
            </bag>
          </class>
        </hibernate-mapping>

    When I run the following command:

        _session.EnableFilter("effectiveDate").SetParameter("asOfDate", DateTime.Today);
        IList<ContentPackage> items = _session.CreateCriteria(typeof(ContentPackage))
            .Add(Restrictions.Eq("Id", id))
            .List<ContentPackage>();

    the resulting SQL has the WHERE clause on the intermediate mapping table (contentAudVidLinks) rather than on the "Clips" table, even though I have added the filter attribute to the bag of Clips. What am I doing wrong?
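
    A hedged sketch of the likely fix: a <filter> placed directly under a many-to-many <bag> is applied to the join table; to filter the target table, nest the filter inside the <many-to-many> element instead, so the condition is emitted against the Clip table's columns:

        <bag name="Clips" table="contentAudVidLinks">
          <key column="fk_contentPackageId"/>
          <many-to-many class="Clip" column="fk_AudVidId">
            <filter name="effectiveDate"
                    condition=":asOfDate BETWEEN startDate AND endDate"/>
          </many-to-many>
        </bag>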


  • mySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision, and I'm not sure how to continue. Here's a very oversimplified version of what I'm building: people can rate photos 1-5, and I need to display the average votes on the picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc.

    The normalization freak in me initially designed the table structure like this:

        Table pictures
        id* | picture | userID

        Table ratings
        id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and am done with it. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics, too: I can easily find who rated a certain picture a 3, and whatnot. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version would seem more efficient, e.g.:

        Table picture
        id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research shows that I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
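
    A common middle ground, sketched below: keep the normalized ratings table as the source of truth and cache running totals on the picture row, updated alongside each vote (ideally in one transaction). The column names are assumed.

        ALTER TABLE pictures
          ADD vote_count INT NOT NULL DEFAULT 0,
          ADD vote_sum   INT NOT NULL DEFAULT 0;

        -- on every new vote: one insert plus one increment
        INSERT INTO ratings (pictureID, userID, rating) VALUES (5, 42, 4);
        UPDATE pictures
           SET vote_count = vote_count + 1,
               vote_sum   = vote_sum + 4
         WHERE id = 5;

        -- the average is then a single-row read
        SELECT vote_sum / vote_count AS avg_rating FROM pictures WHERE id = 5;

    If the cached numbers ever drift, they can be rebuilt from the ratings table, which is exactly the safety the denormalized-only design gives up.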


  • Modifying my website to allow anonymous comments

    - by David
    I write the code for my own website as an educational/fun exercise. Right now part of the website is a blog (like every other site out there :-/) which supports the usual basic blog features, including commenting on posts. But I only have comments enabled for logged-in users; I want to alter the code to allow anonymous comments - that is, to allow people to post comments without first creating a user account on my site, although there will still be some sort of authentication involved to prevent spam.

    Question: what information should I save for anonymous comments? I'm thinking at least display name and email address (for displaying a Gravatar), and probably website URL because I eventually want to accept OpenID as well, but would anything else make sense?

    Other question: how should I modify the database to store this information? The schema I have for the comment table is currently:

        comment_id      smallint(5)  // The unique comment ID
        post_id         smallint(5)  // The ID of the post the comment was made on
        user_id         smallint(5)  // The ID of the user who made the comment
        comment_subject varchar(128)
        comment_date    timestamp
        comment_text    text

    Should I add additional fields for name, email address, etc. to the comment table (which seems like a bad idea)? Create a new "anonymous users" table (and if so, how do I keep anonymous user IDs from conflicting with regular user IDs)? Or create fake user accounts for anonymous users in my existing users table? Part of what's making this tricky is that if someone tries to post an anonymous comment using an email address (or OpenID) that's already associated with an account on my site, I'd like to catch that and prompt them to log in.
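
    One common shape, sketched with assumed column names: make user_id nullable so NULL marks an anonymous comment, and store what the visitor typed alongside it. This avoids both fake accounts and id-collision problems, at the cost of a couple of columns that stay NULL for logged-in users.

        ALTER TABLE comment
          MODIFY user_id smallint(5) NULL,
          ADD author_name  varchar(64)  NULL,
          ADD author_email varchar(128) NULL,
          ADD author_url   varchar(255) NULL;

        -- "this email already has an account" is then a lookup before insert,
        -- assuming a users table with an email column
        SELECT user_id FROM users WHERE email = 'visitor@example.com';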


  • Trying to Use LoadMoreElement in Monotouch.Dialog

    - by user1487581
    I am using MonoTouch to write an iPad app. The app uses tables to browse down through a directory tree and then select a file. I have used MonoTouch.Dialog to browse the directories, and I set up the directory tables as the app starts. However, there are too many files to set up in a table as the app starts, so I want to set up the 'file table' when a file is selected from the lowest-level directory table. I am trying to use LoadMoreElement to do this, but I cannot make it work or find any examples online.

    I have coded the 'Elements API Walkthrough' in the Xamarin tutorial at http://docs.xamarin.com/ios/tutorials/MonoTouch.Dialog and then added a new section to the code:

        _addButton.Clicked += (sender, e) => {
            ++n;
            var task = new Task { Name = "task " + n, DueDate = DateTime.Now };

            var taskElement = new RootElement (task.Name) {
                new Section () {
                    new EntryElement (task.Name, "Enter task description", task.Description)
                },
                new Section () {
                    new DateElement ("Due Date", task.DueDate)
                },
                new Section () {
                    new LoadMoreElement ("Passive", "Active", delegate { MyAction (); })
                }
            };
            _rootElement [0].Add (taskElement);
        };

    where MyAction is:

        public void MyAction ()
        {
            Console.WriteLine ("we have been actioned");
        }

    The problem is that MyAction is triggered, and Console.WriteLine writes the message, but the table stays in the active state and never returns to passive. The documentation says: "Once your code in the NSAction is finished, the UIActivity indicator stops animating and the normal caption is displayed again." What am I missing? Ian
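
    A hedged sketch, assuming the LoadMoreElement overload whose callback receives the element itself: set its Animating property back to false when your work is done, which stops the spinner and restores the passive caption. (In versions where the caption swap is not automatic, this flag is what drives it.)

        new Section () {
            new LoadMoreElement ("Passive", "Active", loadMore => {
                Console.WriteLine ("we have been actioned");
                // ... build and insert the file table here ...
                loadMore.Animating = false;   // spinner off, "Passive" caption back
            })
        }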


  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table? I've got something that isn't working, and I just want to see what I'm supposed to do.

        USE [BAMPI_TP_dev]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[MemberSelectedOptions](
            [OptionId] [int] NOT NULL,
            [SeqNo] [smallint] IDENTITY(1,1) NOT NULL,
            [OptionStatusCd] [char](1) NULL
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF

    This is what I have already that isn't working:

        @Entity
        @Table(schema="dbo", name="MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Embeddable
            public static class MSOPK implements Serializable {
                private static final long serialVersionUID = 1L;

                @Column(name="OptionId")
                int optionId;

                @GeneratedValue(strategy=GenerationType.IDENTITY)
                @Column(name="SeqNo", unique=true, nullable=false)
                BigDecimal seqNo;

                // Getters and setters here...
            }

            private static final long serialVersionUID = 1L;

            @EmbeddedId
            MSOPK pk = new MSOPK();

            @Column(name="OptionStatusCd")
            String optionStatusCd;

            // More getters and setters here...
        }

    I get the following stack trace:

        [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error
            Cannot insert explicit value for identity column in table 'MemberSelectedOptions'
            when IDENTITY_INSERT is set to OFF.
        [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error
            Could not synchronize database state with session
        org.hibernate.exception.SQLGrammarException: could not insert:
            [com.bob.proj.ws.model.MemberSelectedOption]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143)
            at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
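
    A hedged sketch of one way out: @GeneratedValue inside an @Embeddable composite key is not honored, so Hibernate sends an explicit SeqNo and SQL Server rejects it. Since SeqNo is the IDENTITY column and is already unique on its own, mapping it as the sole @Id (assuming that fits your model) lets the database assign it:

        @Entity
        @Table(schema = "dbo", name = "MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)  // DB assigns SeqNo
            @Column(name = "SeqNo")
            private short seqNo;

            @Column(name = "OptionId")
            private int optionId;

            @Column(name = "OptionStatusCd")
            private String optionStatusCd;

            // Getters and setters...
        }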


  • Using Linq2Sql to insert data into multiple tables using an auto incremented primary key

    - by Thomas
    I have a Customer table with a primary key (int, auto-increment) and an Address table with a foreign key to the Customer table. I am trying to insert both rows into the database in one nice transaction.

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer()
            {
                Email = customer.Email
            };

            Address b = new Address()
            {
                CustomerID = newCustomer.CustomerID,
                Address1 = billingAddress.Address1
            };

            db.Customers.InsertOnSubmit(newCustomer);
            db.Addresses.InsertOnSubmit(b);
            db.SubmitChanges();
        }

    When I run this, I was hoping that the Customer and Address tables would automatically get the correct keys, since the context knows this is an auto-incremented key and would do the two inserts with the right key in both tables. The only way I can get this to work is to call SubmitChanges() on the Customer object first, then create the Address and call SubmitChanges() on that as well. This makes two round trips to the database, and I would like to see if I can do it in one transaction. Is it possible? Thanks
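
    A hedged sketch, assuming the DBML maps the Customer-Address association: assign the entity rather than the raw foreign-key value, and LINQ to SQL orders the inserts, propagates the generated key into the FK, and runs the single SubmitChanges() as one transaction.

        using (var db = new DatabaseDataContext())
        {
            var newCustomer = new Customer { Email = customer.Email };

            var billing = new Address
            {
                Address1 = billingAddress.Address1,
                Customer = newCustomer   // set the association, not CustomerID
            };

            db.Customers.InsertOnSubmit(newCustomer);  // billing follows via the association
            db.SubmitChanges();                        // one transaction; key fixed up for us
        }

    Setting CustomerID by hand copies whatever value the unsaved Customer has at that moment (zero), which is why the two-step version seemed necessary.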

