Search Results

Search found 25547 results on 1022 pages for 'table locking'.


  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that holds children of the same type. How do I set up an NHibernate mapping file to store this item?

        public class MyItem
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Version { get; set; }
            public virtual IList<MyItem> Children { get; set; }
        }

    So roughly the hbm.xml would be:

        <class name="MyItem" table="tb_myitem">
          <id name="Id" column="id" type="String" length="32">
            <generator class="uuid.hex" />
          </id>
          <property name="Name" column="name" />
          <property name="Version" column="version" />
          <bag name="Children" cascade="all-delete-orphan" lazy="false">
            <key column="children_id" />
            <one-to-many class="MyItem" not-found="ignore"/>
          </bag>
        </class>

    I don't think this would work. Perhaps I need to create another class, say MyItemChildren, use that as the Children member, and then do the mapping in that class? This would mean having two tables: one table holds the MyItem rows and the other holds the references between items. NOTE: a child item could have many parents.
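    Since a child can have many parents, the underlying storage is a classic many-to-many self-reference. A sketch of what the two tables could look like in SQL (the link-table name and column names here are assumptions, not part of the question):

        CREATE TABLE tb_myitem (
            id      VARCHAR(32) PRIMARY KEY,
            name    VARCHAR(255),
            version VARCHAR(255)
        );

        -- hypothetical link table: one row per parent/child pair
        CREATE TABLE tb_myitem_children (
            parent_id VARCHAR(32) NOT NULL REFERENCES tb_myitem (id),
            child_id  VARCHAR(32) NOT NULL REFERENCES tb_myitem (id),
            PRIMARY KEY (parent_id, child_id)
        );

    With a schema like this, the bag would map through the link table (a many-to-many rather than a one-to-many), which is what allows the same child to appear under several parents.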

    Read the article

  • Appending Comment Number Anchors to Comments

    - by John
    Hello, I am using a PHP file called comments.php that runs a query to insert values into a MySQL table called "comment". As it does so, MySQL auto-generates a field called "commentid", which is set to auto_increment. The file also contains a loop that echoes out all comments for a given submission. It all works fine and dandy, but I want to pull this "commentid" at the same time and turn it into a hashtag / anchor, so that when it is appended to the end of the URL the browser jumps to that comment. Someone said on another question that to do this I should create an anchor on the row where the comment is printed out. How can I do this? Thanks in advance, John

    The query that inserts comments into the MySQL table "comment":

        $query = sprintf("INSERT INTO comment VALUES (NULL, %d, %d, '%s', NULL)", $uid, $subid, $comment);
        mysql_query($query) or die(mysql_error());

    The fields in the table "comment": commentid, loginid, submissionid, comment, datecommented

    The row in the loop where the comments are echoed out:

        echo '<td rowspan="3" class="commentname1">'.stripslashes($row["comment"]).'</td>';
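    For illustration, the id that becomes the anchor can be read back right after the insert on the same connection (this is what PHP's mysql_insert_id() returns). A minimal MySQL sketch, with placeholder values standing in for $uid, $subid and $comment:

        INSERT INTO comment VALUES (NULL, 42, 7, 'example comment text', NULL);
        SELECT LAST_INSERT_ID() AS commentid;  -- the auto_increment value just generated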

    Read the article

  • NHibernate. Initiate save collection at saving parent

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a problem saving my entity.

    Mapping:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Clients.Core" namespace="Clients.Core.Domains">
          <class name="Sales, Clients.Core" table='sales'>
            <id name="Id" unsaved-value="0">
              <column name="id" not-null="true"/>
              <generator class="native"/>
            </id>
            <property name="Guid">
              <column name="guid"/>
            </property>
            <set name="Accounts" table="sales_users" lazy="false">
              <key column="sales_id" />
              <element column="user_id" type="Int32" />
            </set>

    Domain:

        public class Sales : BaseDomain
        {
            ICollection<int> accounts = new List<int>();

            public virtual ICollection<int> Accounts
            {
                get { return accounts; }
                set { accounts = value; }
            }

            public Sales() { }
        }

    When I save a Sales object, the Accounts collection is not saved to the sales_users table. What should I do to get it saved? Please don't advise me to use classes inside the List. Thanks a lot.

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements has fallen to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The information is inserted in two parts: a master record and detail records.

    The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := (SELECT LAST_INSERT_ID());
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table if it would help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
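    As a further variation on the proposed solution (a sketch, still MySQL), the detail rows can also be sent as a single multi-row INSERT, cutting the number of statements per master record roughly in half again:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := LAST_INSERT_ID();

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0,   ' ', 1),
               (@month_ref_id, 0.5, ' ', 2),
               (@month_ref_id, 0,   'T', 3);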

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. I want to speed things up by archiving data that is more than 3 years old - saving it into a table called orders-archive and then purging those rows from the orders table. If we need to research something or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep scanning the older data all the time. These move-and-purge operations can then be cron'd to run on a weekly basis. I already did some tests and I know that I will cut my search times by about 4 times. So far so good, right?

    However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. The catch is that I have about 20 tables that I want to archive, and god knows how many searches / finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
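    On the SQL side, one possible fallback (a sketch only; the view, table and column names are assumed, and it presumes both tables keep the same layout) is a view that unions the live and archive tables, so code paths that must see everything can query the view while the hot path keeps hitting orders directly:

        CREATE VIEW all_orders AS
            SELECT * FROM orders
            UNION ALL
            SELECT * FROM orders_archive;

        -- research / customer-history lookups go against the view
        SELECT * FROM all_orders WHERE customer_id = 123 ORDER BY created_at DESC;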

    Read the article

  • If Html File Has No Ending "/tr" Tag OR "/td" Tag Then HTML Agility Pack Does Not Read That Information

    - by Harikrishna
    I am using HTML Agility Pack to parse HTML content, specifically to extract table information. It works, but if there is no closing "/tr" tag or "/td" tag then it does not parse that information correctly. For example:

        <TABLE border=0><TBODY><TR height=20>
          <TD class=xl27boL noWrap width="7%">01890345&nbsp;</TD>
          <TD class=xl27boL noWrap width="4%">1416</TD>
          <TD class=xl27boL noWrap width="7%">kjlkjkls&nbsp;</TD>
          <TD class=xl27boL noWrap width="4%">14:01:57&nbsp;</TD>
          <TD class=xl27boL noWrap align=left width="15%">Football</TD>
          <TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">50&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">4997.2500</TD>
          <TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD>
          <TD class=xl27boRLnoWrap align=right width="8%">249612.64&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">4997.2500</TD>
          <TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">249.86</TD>
          <TD class=xl27boL noWrap align=right width="5%">4992.2528</TD>
          <TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD>
          <TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD>
          <TD class=xl27boRL noWrap align=right width="8%">249612.64&nbsp;</TD>
        </table>

    Note that the row and body are never closed. What should I do to handle markup like this?

    Read the article

  • How do I write this SQL statement to get the ad and posting? (PHP/MySQL)

    - by ggfan
    I am a little confused about the logic of how to write this SQL statement. When a user clicks on a tag, say HTML, the page should display all posts with HTML as a tag (a post can have multiple tags). I have three tables:

    posting -- posting_id, title, detail, etc.
    tags -- tagID, tagname
    postingtag -- posting_id, tagID

    I want to display the title of each matching post and the date it was added.

        global $dbc;
        $tagID = $_GET['tagID']; // the GET is set by the URL

        // part I need help with: I need another WHERE condition to get to the posting table
        $query = "SELECT p.title, p.date_added, t.tagname
                  FROM posting as p, postingtag as pt, tags as t
                  WHERE t.tagID=$tagID";
        $data = mysqli_query($dbc, $query);

        echo '<table>';
        echo '<tr><td><b>Title</b></td><td><b>Date Posted</b></td></tr>';
        while ($row = mysqli_fetch_array($data)) {
            echo '<tr><td>'.$row['title'].'</td>';
            echo '<td>'.$row['date_added'].'</td></tr>';
        }
        echo '</table>';
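    A sketch of the join the WHERE clause is missing (column names are taken from the three tables listed above; in real code the tag id should be bound or escaped rather than interpolated):

        SELECT p.title, p.date_added, t.tagname
        FROM posting AS p
        JOIN postingtag AS pt ON pt.posting_id = p.posting_id
        JOIN tags AS t ON t.tagID = pt.tagID
        WHERE t.tagID = 5;   -- 5 stands in for the value from $_GET['tagID']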

    Read the article

  • SQL problem - select accross multiple tables (user groups)

    - by morpheous
    I have a db schema which looks something like this:

        create table user (id int, name varchar(32));
        create table group (id int, name varchar(32));
        create table group_member (foobar_id int, user_id int, flag int);

    I want to write a query that does the following: given a valid user id (UID), fetch the ids of all users that are in the same group as the specified user id (UID) AND have group_member.flag=3.

    Rather than just being handed the SQL, I want to learn how to think like a DB programmer. As a coder, SQL is my weakest link (since I am far more comfortable with imperative languages than declarative ones) - but I want to change that. Anyway, here are the steps I have identified as necessary to break down the task. I would be grateful if some SQL guru could demonstrate the simple SQL statements - i.e. atomic SQL statements, one for each of the identified subtasks below - and then, finally, how I can combine those statements to make the ONE statement that implements the required functionality. Here goes (assume the specified user_id [UID] = 1):

        //Subtask #1. Fetch list of all groups of which I am a member
        Select group.id from user inner join group_member
        where user.id=group_member.user_id and user.id=1

        //Subtask #2. Fetch a list of all members who are members of the groups I am a member of (i.e. groups in subtask #1)
        //Not sure about this ...
        select user.id from user, group_member gm1, group_member gm2, ... [Stuck]

        //Subtask #3. Get list of users that satisfy criteria group_member.flag=3
        Select user.id from user inner join group_member
        where user.id=group_member.user_id and user.id=1 and group_member.flag=3

    Once I have the SQL for subtask #2, I'd then like to see how the complete SQL statement is built from these subtasks. (You don't have to use the SQL in the subtasks; it is just a way of explaining the steps involved. Also, my SQL may be incorrect/inefficient - if so, please feel free to correct it and point out what was wrong with it.) Thanks
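    For reference, one way the subtasks combine into a single statement is a self-join on group_member (a sketch; it assumes foobar_id is the group id column in that table):

        SELECT DISTINCT gm2.user_id
        FROM group_member AS gm1
        JOIN group_member AS gm2 ON gm2.foobar_id = gm1.foobar_id
        WHERE gm1.user_id = 1        -- the specified user (UID)
          AND gm2.flag = 3
          AND gm2.user_id <> 1;      -- leave out the specified user, if desired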

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            """Not Found in Cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the template I show them along with related info from the others. Generating the page takes a few seconds even for a very small table (<100 records). Is there an easy way to cache queries from templates? Or do I have to build some big structure in my view (with all related tables), cache it, and send that to the template?

    Read the article

  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (each contains data for a different social network). Each has a datetime field named dated. I'm already grouping by hour, as you can see below (this one is for LinkedIn):

        SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour FROM upd8r_facebook_accts WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour FROM upd8r_twitter_accts WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        GROUP BY hour
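    One way to make the grouping apply across all three tables (a sketch; the literal date stands in for $start_date) is to aggregate per table first and then sum over the union:

        SELECT hour, SUM(cnt) AS total
        FROM (
            SELECT date_format(dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS cnt
            FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '2013-01-01' GROUP BY hour
            UNION ALL
            SELECT date_format(dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS cnt
            FROM upd8r_facebook_accts WHERE CAST(dated AS DATE) = '2013-01-01' GROUP BY hour
            UNION ALL
            SELECT date_format(dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS cnt
            FROM upd8r_twitter_accts WHERE CAST(dated AS DATE) = '2013-01-01' GROUP BY hour
        ) AS combined
        GROUP BY hour;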

    Read the article

  • jquery append row onclick

    - by BigDogsBarking
    Pretty new to jQuery, so I'm hoping someone can help me out... There's a couple of things going on here... Help with any part of it is appreciated. For starters, I'm trying to append a row to a table when a user clicks in the first enabled input field for that row. Here's the code I'm trying to use:

        $("#fitable > tbody > tr > td > input").bind('focus', function() {
            if ($(this).attr('disabled', false)) {
                $(this).click(function() {
                    var newRow = '<tr><td><input name="f1[]" value="" /><label>CustNew</label></td><td><input name="field_n1[]" value="" /><label>N1</label></td><td><input name="field_n2[]" value="" /><label>N2</label></td></tr>';
                    $(this).closest("tbody").append(newRow);
                });
            }
        });

    If it's helpful, here's the html:

        <table id="fitable">
          <tbody>
            <tr valign="top">
              <td><input disabled="disabled" name="cust" id="edit-cust" value="[email protected]" type="text"><label>Cust</label></td>
              <td><input name="field_n1[]" value="" type="text"><label>N1</label></td>
              <td><input name="field_n2[]" value="" type="text"><label>N2</label></td>
            </tr>
          </tbody>
        </table>

    Read the article

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition. In my code I need to loop through a series of values (about five in a row), and for each value I need to insert the value into a table and dynamically create a new view, using that value as part of the WHERE clause and as part of the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly, yet only the LAST view is being created. Every time.

    For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view, named MyView_x5, that has the correct WHERE clause.

    At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would build the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB and did the insert and CREATE VIEW statements from my DLL. Same result every time.

    Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created.

    Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My Express edition is 9.00.1399.06. Not sure about the Enterprise edition, but I think it is SP2.
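    For comparison, a bare T-SQL sketch of the kind of loop described (all object names here are hypothetical, not taken from the question); because CREATE VIEW must be the only statement in its batch, each one is issued through its own EXEC:

        DECLARE @val varchar(10), @sql nvarchar(max);

        DECLARE val_cursor CURSOR FOR
            SELECT SuffixValue FROM dbo.SuffixValues;   -- hypothetical source of X1..X5

        OPEN val_cursor;
        FETCH NEXT FROM val_cursor INTO @val;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            INSERT INTO dbo.MyTable (SuffixValue) VALUES (@val);

            SET @sql = N'CREATE VIEW dbo.MyView_' + @val +
                       N' AS SELECT * FROM dbo.MyTable WHERE SuffixValue = ''' + @val + N'''';
            EXEC (@sql);   -- each EXEC runs as its own batch

            FETCH NEXT FROM val_cursor INTO @val;
        END

        CLOSE val_cursor;
        DEALLOCATE val_cursor;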

    Read the article

  • How to display multiple categories and products underneath each category?

    - by shin
    Generally there is a category menu where each link leads to a category page that shows all the items under that category. Now I need to show all the categories and the products underneath each of them on the same page with PHP/MySQL, like this:

        Category 1
        description of category 1
        item 1
        item 2
        ...

        Category 2
        description of category 2
        item 5
        item 6
        ...

        Category 3
        description of category 3
        item 8
        item 9
        ...

    I have category and product tables in my database, but I am not sure how to proceed.

        CREATE TABLE IF NOT EXISTS `omc_product` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          `shortdesc` varchar(255) NOT NULL,
          `longdesc` text NOT NULL,
          `thumbnail` varchar(255) NOT NULL,
          `image` varchar(255) NOT NULL,
          `product_order` int(11) DEFAULT NULL,
          `class` varchar(255) DEFAULT NULL,
          `grouping` varchar(16) DEFAULT NULL,
          `status` enum('active','inactive') NOT NULL,
          `category_id` int(11) NOT NULL,
          `featured` enum('none','front','webshop') NOT NULL,
          `other_feature` enum('none','most sold','new product') NOT NULL,
          `price` float(7,2) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

        CREATE TABLE IF NOT EXISTS `omc_category` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          `shortdesc` varchar(255) NOT NULL,
          `longdesc` text NOT NULL,
          `status` enum('active','inactive') NOT NULL,
          `parentid` int(11) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

    I will appreciate your help. Thanks in advance.
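    One way to feed such a page from a single query (a sketch based on the two tables above) is to join products to their categories and order by category, then let the PHP loop print a new heading whenever the category id changes:

        SELECT c.id        AS category_id,
               c.name      AS category_name,
               c.shortdesc AS category_desc,
               p.id        AS product_id,
               p.name      AS product_name
        FROM omc_category c
        LEFT JOIN omc_product p
               ON p.category_id = c.id
              AND p.status = 'active'
        WHERE c.status = 'active'
        ORDER BY c.name, p.product_order;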

    Read the article

  • jQuery re-checking check boxes

    - by Jacob
    Hi, I am having problems re-checking some checkboxes after an ajax call updates a table. I have a table of what the project calls 'shares'. I want to: 1) check for and save any checked shares to an array, 2) do my ajax call to update the table of shares, 3) re-check any that were checked before the ajax update. My code is not working and I can't see why. Any tips, help or advice much appreciated.

        // Array to hold our checked share ids
        var savedShareIDs = new Array();

        // Add checked share ids into array
        $("input:checkbox[name=share_ids]:checked").each(function() {
            savedShareIDs.push($(this).val());
        });

        // Do ajax update
        Dajaxice.pagination_shares('Dajax.process', {'id': 1, 'page': 1})

        // Re-check any that were checked before ajax update
        $("input:checkbox[name=share_ids]").each(function() {
            if ($.inArray($(this).val(), savedShareIDs) > -1) {
                $(this).attr('checked', true)
            }
        });

    The problem is the checkboxes are not being checked. I'm pretty sure the loop works and the inArray check works; it's just not checking the checkboxes. Can anyone see where I'm going wrong? Thanks.

    Read the article

  • SQLAlchemy Mapping problem

    - by asdvalkn
    Dear Everyone, I am trying to get SQLAlchemy to correctly map my data. Note that a unified group is basically a group of groups: one unifiedGroup maps to many groups, but each group can only map to one unifiedGroup. So basically this is the definition of my unifiedGroups table:

        CREATE TABLE `unifiedGroups` (
          `ugID` INT AUTO_INCREMENT,
          `gID` INT NOT NULL,
          PRIMARY KEY(`ugID`, `gID`),
          KEY(`gID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 ;

    Note that each row is a (ugID, gID) tuple. (I do not know beforehand how many gIDs there are per ugID, so this is probably the most sensible and simplest method.)

    Definition of my UnifiedGroup class:

        class UnifiedGroup(object):
            """UnifiedProduct behaves very much like a group"""
            def __init__(self, ugID):
                self.ugID = ugID
                # Added by mapping
                self.groups = False

            def __str__(self):
                return '<%s:%s>' % (self.ugID, ','.join([g for g in self.groups]))

    These are my mapping tables:

        tb_groupsInfo = Table('groupsInfo', metadata,
            Column('gID', Integer, primary_key=True),
            Column('gName', String(128)),
        )
        tb_unifiedGroups = Table('unifiedGroups', metadata,
            Column('ugID', Integer, primary_key=True),
            Column('gID', Integer, ForeignKey('groupsInfo.gID')),
        )

    My mapper maps in the following manner:

        mapper(UnifiedGroup, tb_unifiedGroups, properties={
            'groups': relation(Group, backref='unifiedGroup')
        })

    However, when I try groupInstance.unifiedGroup, I get an empty list [], while groupInstance.unifiedGroup.groups returns me an error:

        AttributeError: 'InstrumentedList' object has no attribute 'groups'
        Traceback (most recent call last):
          File "Mapping.py", line 119, in <module>
            print p.group.unifiedGroup.groups
        AttributeError: 'InstrumentedList' object has no attribute 'groups'

    What is wrong?

    Read the article

  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL select query. Currently (as shown in the code below) I am simply creating a table row for each record - nothing complicated. However, I want to divide the output into more than one column, depending on the number of returned results. I've been wrapping my brain around how to count this in the PHP, and I'm not getting the results I need.

        <table>
        <?
        $query = mysql_query("SELECT * FROM `sometable`");
        while($rows = mysql_fetch_array($query)){
        ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <?
        }
        ?>
        </table>

    Obviously only one column is generated. What I want: if the returned records reach 10, then I create a new column. In other words, if 12 results are returned, I have 2 columns; if I have 22 results, I'll have 3 columns, and so on.

    Read the article

  • How do I Order on common attribute of two models in the DB?

    - by Will
    If I have two tables, Books and CDs, with corresponding models, I want to display to the user a combined list of books and CDs. I also want to be able to sort this list on common attributes (release date, genre, price, etc.), and I have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them, and then sorting them. This is very inefficient, and it also breaks down when I use paging and filtering. If I am on page 2, how do I know what page of each individual table I am really on? There is no way to determine this without getting all items from each table. I have thought about creating a new class called Item that has a one-to-one relationship with either a Book or a CD and doing something like:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "genre ASC")

    However, this gives back an ambiguous column error, so the order can only be written as:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "books.genre ASC")

    And that does not interleave the books and CDs in the way that I want. Any suggestions?
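    For what it's worth, the interleaved, paginated list described here corresponds to a UNION at the SQL level (a sketch; it assumes conventional table names books and cds and that the common columns share names in both tables):

        SELECT id, title, release_date, genre, price, 'Book' AS item_type FROM books
        UNION ALL
        SELECT id, title, release_date, genre, price, 'CD'   AS item_type FROM cds
        ORDER BY genre ASC
        LIMIT 20 OFFSET 0;   -- OFFSET 20 for page 2, and so on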

    Read the article

  • application blocking

    - by pradeep
    I have an application which runs in the browser (a PHP application). I need to lock this application down so that only the few machines and laptops we allow can access it. IP locking cannot be done, as the laptops can be on wifi from outside. Any ideas on how to do this?

    Read the article

  • DOM Storage and locks

    - by user535759
    Since DOM storage and its equivalents persist across tabs and windows, I've thought about using it for message passing. The problem is that fetch and store are separate operations, and therefore not atomic. I have models that rely on UUID generation, conflict resolution, and beaconing to cover the small subset of what I need to do, but my real question is this: since local storage is a shared memory resource, what locking mechanisms are available for mutual access?

    Read the article

  • Setting the height of a row in a JTable in java

    - by Douglas Grealis
    I have been searching for a solution to increase the height of a row in a JTable. I have been using the setRowHeight(int, int) method, which compiles and runs OK, but no row's height is actually increased. When I call getRowHeight(int) for the row whose height I set, it does print out the size I set it to, so I'm not sure what is wrong. The code below is a rough illustration of how I am trying to solve it. My class extends JFrame.

        String[] columnNames = {"Column 1", "Column 2", "Column 3"};
        JTable table = new JTable(new DefaultTableModel(columnNames, people.size()));
        DefaultTableModel model = (DefaultTableModel) table.getModel();

        int count = 1;
        for (Person p : people) {
            model.insertRow(count, (new Object[]{count, p.getName(), p.getAge() + "", p.getNationality()}));
            count++;
        }

        table.setRowHeight(1, 15); // Try to set height to 15 (I've tried higher)

    Can anyone tell me where I am going wrong? I am trying to increase the height of row 1 to 15 pixels.

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row into a table named 'alarms' whenever a property of an alarm changes (such as the time the alarm was activated, acknowledged or switched off). I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
          INSERT INTO `alarm_display` (Tag,...,OffTime)
          VALUES (new.Tag,...,new.OffTime)
          ON DUPLICATE KEY UPDATE OnDate=new.OnDate,...,OffTime=new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
          DELETE FROM alarms WHERE Tag=new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
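    A likely factor here: MySQL does not allow a trigger to modify the table it is defined on, because that table is still in use by the statement that fired the trigger, so a DELETE FROM alarms inside a trigger on alarms cannot work. One alternative is to run the cleanup outside the triggers; a sketch using the event scheduler (the event name is made up, the join condition assumes alarm_display also stores Tag, and the scheduler must be enabled):

        CREATE EVENT purge_mirrored_alarms
        ON SCHEDULE EVERY 1 MINUTE
        DO
          DELETE a
          FROM alarms a
          JOIN alarm_display d ON d.Tag = a.Tag;   -- only remove rows already mirrored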

    Read the article

  • Aggregating a list of dates to start and end date

    - by Joe Mako
    I have a list of dates and IDs, and I would like to roll them up into periods of consecutive dates, within each ID. For a table called "data" with the columns "testid" and "pulldate":

        | A79 | 2010-06-02 |
        | A79 | 2010-06-03 |
        | A79 | 2010-06-04 |
        | B72 | 2010-04-22 |
        | B72 | 2010-06-03 |
        | B72 | 2010-06-04 |
        | C94 | 2010-04-09 |
        | C94 | 2010-04-10 |
        | C94 | 2010-04-11 |
        | C94 | 2010-04-12 |
        | C94 | 2010-04-13 |
        | C94 | 2010-04-14 |
        | C94 | 2010-06-02 |
        | C94 | 2010-06-03 |
        | C94 | 2010-06-04 |

    I want to generate a table with the columns "testid", "group", "start_date", "end_date":

        | A79 | 1 | 2010-06-02 | 2010-06-04 |
        | B72 | 2 | 2010-04-22 | 2010-04-22 |
        | B72 | 3 | 2010-06-03 | 2010-06-04 |
        | C94 | 4 | 2010-04-09 | 2010-04-14 |
        | C94 | 5 | 2010-06-02 | 2010-06-04 |

    This is the code I came up with:

        SELECT t2.testid, t2.group, MIN(t2.pulldate) AS start_date, MAX(t2.pulldate) AS end_date
        FROM (SELECT t1.pulldate, t1.testid,
                     SUM(t1.check) OVER (ORDER BY t1.testid, t1.pulldate) AS group
              FROM (SELECT data.pulldate, data.testid,
                           CASE WHEN data.testid=LAG(data.testid,1) OVER (ORDER BY data.testid, data.pulldate)
                                 AND data.pulldate=date (LAG(data.pulldate,1) OVER (PARTITION BY data.testid ORDER BY data.pulldate)) + integer '1'
                                THEN 0 ELSE 1 END AS check
                    FROM data
                    ORDER BY data.testid, data.pulldate) AS t1) AS t2
        GROUP BY t2.testid, t2.group
        ORDER BY t2.group;

    I use the LAG window function to compare each row to the previous one, putting a 1 wherever I need to start a new group, then take a running sum of that column and aggregate over the combinations of "group" and "testid". Is there a better way to accomplish my goal, or does this operation have a name? I am using PostgreSQL 8.4.
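    This kind of operation is commonly called a "gaps and islands" problem. A slightly shorter formulation (a sketch against the same "data" table, PostgreSQL 8.4) subtracts a per-id row number from each date, so that consecutive dates share the same grouping value:

        SELECT testid,
               MIN(pulldate) AS start_date,
               MAX(pulldate) AS end_date
        FROM (SELECT testid,
                     pulldate,
                     pulldate - (ROW_NUMBER() OVER (PARTITION BY testid ORDER BY pulldate))::int AS grp
              FROM data) AS s
        GROUP BY testid, grp
        ORDER BY testid, start_date;

    A global "group" number, if needed, can then be produced with another ROW_NUMBER() over the grouped result.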

    Read the article

  • How to Map Two Tables To One Class in Fluent NHibernate

    - by Richard Nagle
    I am having a problem with Fluent NHibernate when mapping two tables to one class. I have the following database schema:

        TABLE dbo.LocationName
        (
            LocationId INT PRIMARY KEY,
            LanguageId INT PRIMARY KEY,
            Name VARCHAR(200)
        )

        TABLE dbo.Language
        (
            LanguageId INT PRIMARY KEY,
            Locale CHAR(5)
        )

    And I want to build the following class definition:

        public class LocationName
        {
            public virtual int LocationId { get; private set; }
            public virtual int LanguageId { get; private set; }
            public virtual string Name { get; set; }
            public virtual string Locale { get; set; }
        }

    Here is my mapping class:

        public LocalisedNameMap()
        {
            WithTable("LocationName");

            UseCompositeId()
                .WithKeyProperty(x => x.LanguageId)
                .WithKeyProperty(x => x.LocationId);

            Map(x => x.Name);

            WithTable("Language", lang =>
            {
                lang.WithKeyColumn("LanguageId");
                lang.Map(x => x.Locale);
            });
        }

    The problem is with the mapping of the Locale field, which comes from another table, and in particular that the keys between those tables don't match. Whenever I run the application with this mapping I get the following error on startup:

        Foreign key (FK7FC009CCEEA10EEE:Language [LanguageId])) must have same number of columns as the referenced primary key (LocationName [LanguageId, LocationId])

    How do I tell NHibernate to map from LocationName to Language using only the LanguageId field?

    Read the article

  • Symfony generating database from model

    - by Sergej Jevsejev
    Hello, I am having trouble generating a simple database from a model. I am using:

    Doctrine on Symfony 1.4.4
    MySQL Workbench 5.2.16 with Doctrine Export 0.4.2dev

    My ER model is here: http://img708.imageshack.us/img708/1716/tmg.png

    Generated YAML file:

        ---
        detect_relations: true
        options:
          collate: utf8_unicode_ci
          charset: utf8
          type: InnoDB

        Course:
          columns:
            id:
              type: integer(4)
              primary: true
              notnull: true
              autoincrement: true
            name:
              type: string(255)
              notnull: true
            keywords:
              type: string(255)
              notnull: true
            summary:
              type: clob(65535)
              notnull: true

        Lecture:
          columns:
            id:
              type: integer(4)
              primary: true
              notnull: true
              autoincrement: true
            course_id:
              type: integer(4)
              primary: true
              notnull: true
            name:
              type: string(255)
              notnull: true
            description:
              type: string(255)
              notnull: true
            url:
              type: string(255)
          relations:
            Course:
              class: Course
              local: course_id
              foreign: id
              foreignAlias: Lectures
              foreignType: many
              owningSide: true

        User:
          columns:
            id:
              type: integer(4)
              primary: true
              unique: true
              notnull: true
              autoincrement: true
            firstName:
              type: string(255)
              notnull: true
            lastName:
              type: string(255)
              notnull: true
            email:
              type: string(255)
              unique: true
              notnull: true
            designation:
              type: string(1024)
            personalHeadline:
              type: string(1024)
            shortBio:
              type: clob(65535)

        UserCourse:
          tableName: user_has_course
          columns:
            user_id:
              type: integer(4)
              primary: true
              notnull: true
            course_id:
              type: integer(4)
              primary: true
              notnull: true
          relations:
            User:
              class: User
              local: user_id
              foreign: id
              foreignAlias: UserCourses
              foreignType: many
              owningSide: true
            Course:
              class: Course
              local: course_id
              foreign: id
              foreignAlias: UserCourses
              foreignType: many
              owningSide: true

    And no matter what I try, this error occurs after running:

        symfony doctrine:build --all --no-confirmation

        SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'user_userid' doesn't exist in table.
        Failing Query: "ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id)".
        Failing Query: ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id)

    I am currently studying Symfony and am stuck with this error. Please help.

    Read the article

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list:

        <class name="Author" table="Authors">
          <id name="Id" column="AuthorId">
            <generator class="identity" />
          </id>
          <property name="Name" />
          <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join">
            <key column="AuthorId" />
            <many-to-many class="Book" column="BookId" where="IsDeleted=0" />
          </bag>
        </class>

    If I return all Author objects using something like the following, I get what initially appear to be duplicate Author records:

        Session.Query<Author>().List<Author>()

    The extra Author objects are created when an author is mapped to Book objects that have both IsDeleted = 1 and IsDeleted = 0. Rather than creating one Author object whose collection contains only the books with IsDeleted = 0, it creates two Author objects. The first Author object has a Books collection containing the books with IsDeleted = 0. The second Author object has a collection of null Book objects. Similarly, if an author has only one map row, and that row points to a book with IsDeleted = 1, then an Author object is returned whose Books collection holds one null object.

    I'm thinking part of the problem stems from the map-table rows satisfying the where condition on the bag but not the where condition on the many-to-many. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else?

    Read the article
