Search Results

Search found 25547 results on 1022 pages for 'table locking'.

Page 28/1022 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • How to access a row from af:table out of context

    - by Vijay Mohan
    Scenario: let's say you have an ADF table in a .jsff page fragment, and the fragment is included as an af:region inside another page (the parent page). Your requirement is to access some specific rows from that table and perform operations on them. Since you are accessing the table outside the context in which it lives, you first have to set up the context, and then you can use the visit-callback mechanism to operate on the table. Here is the sample code:

      final RichTable table = this.getRichTable();
      FacesContext facesContext = FacesContext.getCurrentInstance();
      VisitContext visitContext = RequestContext.getCurrentInstance().createVisitContext(
              facesContext, null,
              EnumSet.of(VisitHint.SKIP_TRANSIENT, VisitHint.SKIP_UNRENDERED), null);
      // Anonymous callback
      UIXComponent.visitTree(visitContext, facesContext.getViewRoot(), new VisitCallback() {
          public VisitResult visit(VisitContext context, UIComponent target) {
              if (table != target) {
                  return VisitResult.ACCEPT;
              }
              // Here goes the actual logic
              Iterator selection = table.getSelectedRowKeys().iterator();
              while (selection.hasNext()) {
                  Object key = selection.next();
                  // store the original key
                  Object origKey = table.getRowKey();
                  try {
                      table.setRowKey(key);
                      Object o = table.getRowData();
                      JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) o;
                      Row row = rowData.getRow();
                      System.out.println(row.getAttribute(0));
                  } catch (Exception ex) {
                      ex.printStackTrace();
                  } finally {
                      // restore the original key
                      table.setRowKey(origKey);
                  }
              }
              return VisitResult.COMPLETE;
          }
      });

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

      CREATE TABLE [dbo].[lq_ActivityLog](
          [ID] [bigint] IDENTITY(1,1) NOT NULL,
          [PlacementID] [int] NOT NULL,
          [CreativeID] [int] NOT NULL,
          [PublisherID] [int] NOT NULL,
          [CountryCode] [nvarchar](10) NOT NULL,
          [RequestedZoneID] [int] NOT NULL,
          [AboveFold] [int] NOT NULL,
          [Period] [datetime] NOT NULL,
          [Clicks] [int] NOT NULL,
          [Impressions] [int] NOT NULL,
          CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
              [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
              [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC)
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]

    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard code (e.g. US), and there is already a country table with an integer ID. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:
    [screenshot of table and index sizes omitted]

    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
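
    One plausible refactor along the lines the post hints at (a sketch only, assuming the existing country lookup table and a SQL Server version with the date type; narrowed types and the dropped IDENTITY column are suggestions, not the author's follow-up):

      CREATE TABLE [dbo].[lq_ActivityLog](
          [Period]          [date]     NOT NULL,  -- date only: 3 bytes instead of 8 for datetime
          [PlacementID]     [int]      NOT NULL,
          [CreativeID]      [int]      NOT NULL,
          [PublisherID]     [int]      NOT NULL,
          [CountryID]       [smallint] NOT NULL,  -- joins to the existing country table (< 300 rows)
          [RequestedZoneID] [int]      NOT NULL,
          [AboveFold]       [smallint] NOT NULL,  -- only -1, 0, 1 are stored
          [Clicks]          [smallint] NOT NULL,  -- 0..5000 fits comfortably
          [Impressions]     [int]      NOT NULL,  -- up to 5,000,000
          CONSTRAINT [PK_lq_ActivityLog] PRIMARY KEY CLUSTERED (
              [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
              [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC)
      ) ON [PRIMARY]
      -- The bigint IDENTITY column is dropped here on the assumption nothing references it;
      -- the clustered composite key already identifies a row.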

    Read the article

  • Compaq R4000 laptop randomly locking up

    - by Josh
    I have a Compaq R4000 laptop with 2GB of RAM, running Ubuntu Linux 9.10. It is randomly locking up on me, approximately once every two days. I have a second partition with Windows XP Home installed, and I have had the system lock up in XP as well, which leads me to believe this is a hardware issue. I have run two passes of Memtest86+ with no errors. The system has a fan that has died, so I initially suspected overheating. However, the system just locked up on me while I was in the middle of typing a script to warn me or shut down if the temperature got too high. When the lockup happened the temperature was 88°F, so I am now starting to believe that may not be the issue. When the system locks up, I cannot SSH in nor ping it. Nothing shows in syslog when I reboot. I have configured it to send syslog messages to a local server as well, and no messages appear on that server when the lockup happens. I am open to any and all advice!

    Read the article

  • Make a snapshot of a live mySQL database with myISAM & innoDB tables without locking

    - by Artem
    We have a live database in production where we are running out of space on the server, so I would like to transfer to a new server without any downtime (or as little downtime as possible). In general, I would also like to have a hot failover copy of the database available. I would like to use replication to get all of the data copied to the new machine, and then at some point flip a switch and have that new machine become the master (a normal failover scenario). My problem is that I am not sure how to initialize replication without locking the db to make the initial snapshot I will use. Is there any way to do this? I know I could do it using mysqldump's --single-transaction if I were using InnoDB, but very unfortunately we have some MyISAM tables in there (in fact the largest 150GB table is MyISAM, and I want to switch it to InnoDB but I can't do that until I have more space and a hot copy to switch to). Any ideas? Is there some way to make such a snapshot? Or, alternatively, is there a way to get replication to "catch up" without a snapshot for initialization?
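
    For context, a minimal sketch of the usual seeding options (the flags are mysqldump's; with MyISAM tables in the mix, the lock-free variant is not available):

      # InnoDB-only: a consistent, non-locking dump that records the master's binlog position
      mysqldump --all-databases --single-transaction --master-data=2 > snapshot.sql

      # With MyISAM tables present: a consistent dump needs a global read lock for its duration
      mysqldump --all-databases --lock-all-tables --master-data=2 > snapshot.sql

    The other common route is a filesystem-level snapshot (e.g. LVM) taken under a brief FLUSH TABLES WITH READ LOCK, which holds the lock only for the instant the snapshot is created rather than for the whole copy.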

    Read the article

  • SQL Server Read Locking behavior

    - by Charles Bretana
    SQL Server Books Online says that "Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction." Assuming we're talking about a row-level lock, with no explicit transaction, at the default isolation level (Read Committed), what does "read operation" refer to? The reading of a single row of data? The reading of a single 8K IO page? Or does the lock last until the complete SELECT statement in which it was acquired has finished executing, no matter how many other rows are involved? NOTE: The reason I need to know this is that we have a several-second read-only select statement generated by a data-layer web service, which creates page-level shared read locks, generating a deadlock due to conflicts with row-level exclusive update locks from a replication process that keeps the server updated. The select statement is fairly large, with many sub-selects, and one DBA is proposing that we rewrite it to break it up into multiple smaller statements (shorter-running pieces), "to cut down on how long the locks are held". As this assumes that the shared read locks are held until the complete select statement has finished, if that is wrong (if locks are released when the row, or the page, is read) then that approach would have no effect whatsoever.
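
    A quick way to watch the difference yourself (a sketch, assuming some readable table dbo.t; sys.dm_tran_locks is available from SQL Server 2005 on):

      SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
      BEGIN TRAN;
      SELECT * FROM dbo.t;
      -- S locks are released as the scan proceeds; few or none should remain here
      SELECT resource_type, request_mode FROM sys.dm_tran_locks
      WHERE request_session_id = @@SPID;
      COMMIT;

      SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
      BEGIN TRAN;
      SELECT * FROM dbo.t;
      -- S locks are now retained until the transaction ends, so they show up here
      SELECT resource_type, request_mode FROM sys.dm_tran_locks
      WHERE request_session_id = @@SPID;
      COMMIT;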

    Read the article

  • NFS-shared file-system is locking up

    - by fredden
    Our NFS-shared file-system is locking up. Please feel free to ask any questions you feel are relevant. :) When it happens, there are a lot of processes in "disk sleep" state, and the load averages on our machines sky-rocket. The machines are responsive over SSH, but the majority of our websites (apache+mod_php) just hang, as does our email system (exim+dovecot). Any websites which don't require write access to the file-system continue to operate. The load averages continue to rise until some kind of time-out is reached, which takes at least 10-15 minutes. I've seen load averages over 800, yet the machines are still responsive for actions which don't require writing to the shared file-system. I've been investigating a variety of options, which have all turned out to be red herrings: nagios, proftpd, bind, cron tasks. I'm seeing these messages in the file server's system log:

      Jul 30 09:37:17 fs0 kernel: [1810036.560046] statd: server localhost not responding, timed out
      Jul 30 09:37:17 fs0 kernel: [1810036.560053] nsm_mon_unmon: rpc failed, status=-5
      Jul 30 09:37:17 fs0 kernel: [1810036.560064] lockd: cannot monitor node2
      Jul 30 09:38:22 fs0 kernel: [1810101.384027] statd: server localhost not responding, timed out
      Jul 30 09:38:22 fs0 kernel: [1810101.384033] nsm_mon_unmon: rpc failed, status=-5
      Jul 30 09:38:22 fs0 kernel: [1810101.384044] lockd: cannot monitor node0

    Software involved: VMware, Debian lenny (64-bit), ancient Red Hat (32-bit, version 7 I believe), Debian etch (32-bit), NFS, apache2+mod_php, exim, dovecot, bind, amanda, proftpd, nagios, cacti, drbd, heartbeat, keepalived, LVS, cron, ssmtp, NIS, svn, puppet, memcache, mysql, postgres, Joomla!, Magento, Typo3, Midgard, Symfony, custom PHP apps
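
    One hedged reading of those log lines: rpc.statd on the file server appears to be dead or unregistered with the portmapper, which would leave lockd unable to monitor clients and stall every NFS lock request. Two quick checks (Debian-style paths assumed):

      rpcinfo -p localhost              # 'status' (rpc.statd) and 'nlockmgr' (lockd) should be listed
      /etc/init.d/nfs-common restart    # on Debian, this restarts rpc.statd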

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the contents of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who had searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary-table statistics caching issue, part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around it is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is in use. I will open a Connect item to report this issue. Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

      set nocount on
      set statistics time off
      set statistics io off
      drop table tab7
      go
      create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
      go
      create index test on tab7(c2, c1, c3)
      go
      begin tran
      declare @i int
      set @i = 1
      while @i <= 50000
      begin
      insert into tab7 values (@i, 1, 'a')
      set @i = @i + 1
      end
      commit tran
      go
      insert into tab7 values (50001, 1, 'a')
      go
      checkpoint
      go
      drop proc test_slow
      go
      create proc test_slow @i int
      as
      begin
      declare @j int
      create table #temp1 (c1 int primary key)
      insert into #temp1 (c1) select @i
      select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
      end
      go
      dbcc dropcleanbuffers
      set statistics time on
      set statistics io on
      go
      --high reads as expected for parameter '1'
      exec test_slow 1
      go
      dbcc dropcleanbuffers
      go
      --high reads that are not expected for parameter '2'
      exec test_slow 2
      go
      drop proc test_with_recompile
      go
      create proc test_with_recompile @i int
      as
      begin
      declare @j int
      create table #temp1 (c1 int primary key)
      insert into #temp1 (c1) select @i
      select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
      option (recompile)
      end
      go
      dbcc dropcleanbuffers
      set statistics time on
      set statistics io on
      go
      --high reads as expected for parameter '1'
      exec test_with_recompile 1
      go
      dbcc dropcleanbuffers
      go
      --high reads that are not expected for parameter '2'
      --low reads on 3rd execution as expected for parameter '2'
      exec test_with_recompile 2
      go
      drop proc test_with_alter_table_recompile
      go
      create proc test_with_alter_table_recompile @i int
      as
      begin
      declare @j int
      create table #temp1 (c1 int primary key)
      --to avoid caching of temporary tables one can create a constraint
      --but this might lead to duplicate constraint name error on concurrent usage
      alter table #temp1 add constraint test123 unique(c1)
      insert into #temp1 (c1) select @i
      select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
      option (recompile)
      end
      go
      dbcc dropcleanbuffers
      set statistics time on
      set statistics io on
      go
      --high reads as expected for parameter '1'
      exec test_with_alter_table_recompile 1
      go
      dbcc dropcleanbuffers
      go
      --low reads as expected for parameter '2'
      exec test_with_alter_table_recompile 2
      go
      drop proc test_with_index_recompile
      go
      create proc test_with_index_recompile @i int
      as
      begin
      declare @j int
      create table #temp1 (c1 int primary key)
      --to avoid caching of temporary tables one can create an index
      create index test on #temp1(c1)
      insert into #temp1 (c1) select @i
      select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
      option (recompile)
      end
      go
      set statistics time on
      set statistics io on
      dbcc dropcleanbuffers
      go
      --high reads as expected for parameter '1'
      exec test_with_index_recompile 1
      go
      dbcc dropcleanbuffers
      go
      --low reads as expected for parameter '2'
      exec test_with_index_recompile 2
      go

    Read the article

  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL to create a job queue. The engine used is InnoDB. SQL query:

      START TRANSACTION;
      SELECT * FROM mytable WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1 FOR UPDATE;
      UPDATE mytable SET status = 1;
      COMMIT;

    According to this webpage: "The problem with SELECT FOR UPDATE is that it usually creates a single synchronization point for all of the worker processes, and you see a lot of processes waiting for the locks to be released with COMMIT." Question: Does this mean that if a second, similar query arrives before the first transaction is committed, it will have to wait for the first transaction to finish before it can execute? If this is true, then I do not understand why locking a single row (which I assume is what happens) would affect the next transaction, which should not need to read that locked row. Additionally, can this problem be solved (while still achieving the effect row locking gives a job queue) by using an UPDATE instead of the transaction?

      UPDATE mytable SET status = 1 WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1;
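
    A sketch of how the single-UPDATE approach can be completed (the claimed_by column is an assumed addition, not part of the original schema; CONNECTION_ID() serves as a claim token so the worker can read back the row it just claimed):

      UPDATE mytable SET status = 1, claimed_by = CONNECTION_ID()
      WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1;

      SELECT * FROM mytable WHERE claimed_by = CONNECTION_ID() AND status = 1;

    Because the claim happens in one autocommitted statement, no transaction spans the SELECT, so workers do not queue behind each other's COMMITs.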

    Read the article

  • ProgrammingError: (1146, "Table 'test_<DB>.<TABLE>' doesn't exist") when running unit test for Django

    - by abigblackman
    I'm running a unit test using the Django framework and get this error. Running the actual code does not have this problem; running the unit tests creates a test database on the fly, so I suspect the issue lies there. The code that throws the error looks like this:

      member = Member.objects.get(email=email_address)

    and the model looks like:

      class Member(models.Model):
          member_id = models.IntegerField(primary_key=True)
          created_on = models.DateTimeField(editable=False, default=datetime.datetime.utcnow())
          flags = models.IntegerField(default=0)
          email = models.CharField(max_length=150, blank=True)
          phone = models.CharField(max_length=150, blank=True)
          country_iso = models.CharField(max_length=6, blank=True)
          location_id = models.IntegerField(null=True, blank=True)
          facebook_uid = models.IntegerField(null=True, blank=True)
          utc_offset = models.IntegerField(null=True, blank=True)
          tokens = models.CharField(max_length=3000, blank=True)

          class Meta:
              db_table = u'member'

    There's nothing too odd there that I can see. The user running the tests has the same permissions on the database server as the user that runs the website. Where else can I look to see what's going wrong? Why is this table not being created?

    Read the article

  • How to rotate table-headline in LaTeX table

    - by pagid
    Hi, is there a way to rotate the "Demo1", "Demo2" and "Demo3" headlines 90° in the following LaTeX table?

      \documentclass[a4paper,twoside,10pt]{report}
      \begin{document}
      \begin{tabular}{|l|l|l|l|}
      \hline
               & Demo1 & Demo2 & Demo3 \\ \hline
      Person 1 & x     &       &       \\ \hline
      Person 2 & x     &       & x     \\ \hline
      Person 3 & x     & x     &       \\ \hline
      Person 4 &       & x     & x     \\ \hline
      \end{tabular}
      \end{document}

    Thanks
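
    One standard way to do this (a sketch; \rotatebox is provided by the graphicx package, shown here on the first two rows only):

      \documentclass[a4paper,twoside,10pt]{report}
      \usepackage{graphicx}
      \begin{document}
      \begin{tabular}{|l|l|l|l|}
      \hline
               & \rotatebox{90}{Demo1} & \rotatebox{90}{Demo2} & \rotatebox{90}{Demo3} \\ \hline
      Person 1 & x &   &   \\ \hline
      Person 2 & x &   & x \\ \hline
      \end{tabular}
      \end{document}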

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions about how incremental statistics is expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked assumed that statistics would only be gathered on partitions with stale statistics (where 10% of the rows in the partition had changed). However, what they actually saw when they ran DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this with an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We begin by enabling incremental statistics for the table and gathering statistics on it. After the statistics gather, the last_analyzed date for the table and all of its partitions shows 13-Mar-12, and the column statistics for the ORDERS2 table are populated (the original post shows these in screenshots). We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume both kinds of transaction take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it, and the number of distinct values and the maximum value of the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we see that the global statistics and the statistics on the partitions where rows changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, where that change will impact the global-level statistics.
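
    The enabling step the post describes looks like this (a sketch; the SH schema name is assumed):

      -- Enable incremental statistics maintenance for the partitioned table
      exec dbms_stats.set_table_prefs('SH', 'ORDERS2', 'INCREMENTAL', 'TRUE');
      -- Gather statistics: global stats are now aggregated from partition-level
      -- synopses instead of re-scanning the entire table
      exec dbms_stats.gather_table_stats('SH', 'ORDERS2');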

    Read the article

  • Table Alias in SubSonic

    - by rockacola
    How can I assign aliases to tables with SubSonic 2.1? I am trying to reproduce the following query:

      SELECT *
      FROM posts P
      RIGHT OUTER JOIN post_meta X ON P.post_id = X.post_id
      RIGHT OUTER JOIN post_meta Y ON P.post_id = Y.post_id
      WHERE X.meta_key = "category" AND X.meta_value = "technology"
        AND Y.meta_key = "keyword" AND Y.meta_value = "cloud"

    I am using SubSonic 2.1 and upgrading to 2.2 isn't an option (yet). Thanks.

    Read the article

  • How to JOIN a COUNT from a table, and then effect that COUNT with another JOIN

    - by jakenoble
    Hi, I have three tables:

      Post
      ID  Name
      1   'Something'
      2   'Something else'
      3   'One more'

      Comment
      ID  PostId  ProfileID  Comment
      1   1       1          'Hi my name is'
      2   2       2          'I like cakes'
      3   3       3          'I hate cakes'

      Profile
      ID  Approved
      1   1
      2   0
      3   1

    I want to count the comments for a post, where the profile for the comment is approved. I can select the data from Post and then join a count from Comment fine, but this count should depend on whether the Profile is approved or not. The result I am expecting is:

      PostId  CommentCount
      1       1
      2       0
      3       1

    Thanks for any help.
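
    A sketch of one query that produces that result (COUNT(pr.ID) counts only the rows where an approved profile was actually joined; unmatched LEFT JOIN rows contribute NULL, which COUNT skips):

      SELECT p.ID AS PostId,
             COUNT(pr.ID) AS CommentCount
      FROM Post p
      LEFT JOIN Comment c  ON c.PostId = p.ID
      LEFT JOIN Profile pr ON pr.ID = c.ProfileID AND pr.Approved = 1
      GROUP BY p.ID;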

    Read the article

  • Collapse Tables with specific Table ID? JavaScript

    - by medoix
    I have the below JS at the top of my page, and it successfully collapses ALL tables on load. However, I am trying to figure out how to collapse only tables with the ID "ctable". Or is there some other way of specifying which tables to make collapsible?

      <script type="text/javascript">
      var ELpntr = false;
      function hideall() {
        locl = document.getElementsByTagName('tbody');
        for (i = 0; i < locl.length; i++) {
          locl[i].style.display = 'none';
        }
      }
      function showHide(EL, PM) {
        ELpntr = document.getElementById(EL);
        if (ELpntr.style.display == 'none') {
          document.getElementById(PM).innerHTML = ' - ';
          ELpntr.style.display = 'block';
        } else {
          document.getElementById(PM).innerHTML = ' + ';
          ELpntr.style.display = 'none';
        }
      }
      onload = hideall;
      </script>
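
    A sketch of one way to restrict hideall: since an ID must be unique per page, a class="ctable" marker on each collapsible table is assumed here instead of id="ctable":

      function hideall() {
        var tables = document.getElementsByTagName('table');
        for (var i = 0; i < tables.length; i++) {
          // crude class-name check, adequate for a single marker class
          if ((' ' + tables[i].className + ' ').indexOf(' ctable ') === -1) continue;
          var bodies = tables[i].getElementsByTagName('tbody');
          for (var j = 0; j < bodies.length; j++) {
            bodies[j].style.display = 'none';
          }
        }
      }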

    Read the article

  • ValueError with multi-table inheritance in Django Admin

    - by jorde
    I created two new classes which inherit from the model Entry:

      class Entry(models.Model):
          LANGUAGE_CHOICES = settings.LANGUAGES
          language = models.CharField(max_length=2, verbose_name=_('Comment language'), choices=LANGUAGE_CHOICES)
          user = models.ForeignKey(User)
          country = models.ForeignKey(Country, null=True, blank=True)
          created = models.DateTimeField(auto_now=True)

      class Comment(Entry):
          comment = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))

      class Discount(Entry):
          discount = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))
          coupon = models.CharField(max_length=2000, blank=True, verbose_name=_('Coupon code if needed'))

    After adding these new models to the admin via admin.site.register, I'm getting a ValueError when trying to create a comment or a discount via the admin. Adding entries works fine. Error message:

      ValueError at /admin/reviews/discount/add/
      Cannot assign "''": "Discount.discount" must be a "Discount" instance.
      Request Method: GET
      Request URL: http://127.0.0.1:8000/admin/reviews/discount/add/
      Exception Type: ValueError
      Exception Value: Cannot assign "''": "Discount.discount" must be a "Discount" instance.
      Exception Location: /Library/Python/2.6/site-packages/django/db/models/fields/related.py in set, line 211
      Python Executable: /usr/bin/python
      Python Version: 2.6.1
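
    A hedged diagnosis: with multi-table inheritance, Django adds a reverse accessor to the parent named after the lowercased child class (entry.discount, entry.comment), and the child classes inherit those descriptors. A field that is itself called discount therefore collides with the accessor, which expects a Discount instance rather than a string, matching the error above. Renaming the clashing fields is the usual fix, e.g.:

      class Discount(Entry):
          # renamed from `discount` to avoid the clash with the MTI accessor
          discount_text = models.CharField(max_length=2000, blank=True, verbose_name=_('Comment in English'))
          coupon = models.CharField(max_length=2000, blank=True, verbose_name=_('Coupon code if needed'))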

    Read the article

  • SQL: Recursively get parent records using Common Table Expressions

    - by Martijn B
    Hi there, suppose you have the following tables, where a sale consists of products, a product can be placed in multiple categories, and categories have a hierarchical structure like:

      Men
        Shoes
          Sport
          Casual
        Watches
      Women
        Shoes
          Sport
          Casual
        Watches

    Tables:

      Sale
      id  name
      1   Sale1

      Product
      id  saleidfk  name
      1   1         a
      2   1         b
      3   1         c
      4   1         d
      5   1         e

      ProductCategory
      productid  categoryid
      1          3
      2          3
      3          4
      4          5
      5          10

      Category
      id  ParentCategoryIdFk  name
      1   null                Men
      2   1                   Shoes
      3   2                   Sport
      4   2                   Casual
      5   1                   Watches
      6   null                Women
      7   6                   Shoes
      8   7                   Sport
      9   7                   Casual
      10  6                   Watches

    Question: On my website I want to create a control that shows only the categories of a certain sale, filled with the products of that sale. I also want to include the hierarchical structure of the categories: if we have a leaf node, we recursively go up to the top node. So for Sale1, the query should return:

      Men
        Shoes
          Sport
          Casual
        Watches
      Women
        Watches

    This thing is driving me crazy :-) Thanks in advance! Gr Martijn
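
    A sketch of a recursive CTE that walks from each product's categories up to the roots (T-SQL syntax assumed; duplicates from products that share a category are collapsed at the end):

      WITH SaleCategories AS (
          -- anchor: categories directly attached to the sale's products
          SELECT c.id, c.ParentCategoryIdFk, c.name
          FROM Category c
          JOIN ProductCategory pc ON pc.categoryid = c.id
          JOIN Product p ON p.id = pc.productid
          WHERE p.saleidfk = 1
          UNION ALL
          -- recursive step: climb to each parent until the root is reached
          SELECT parent.id, parent.ParentCategoryIdFk, parent.name
          FROM Category parent
          JOIN SaleCategories child ON child.ParentCategoryIdFk = parent.id
      )
      SELECT DISTINCT id, name FROM SaleCategories;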

    Read the article

  • One-to-many relationship in the same table in Zend

    - by Behrang
    I have groupTable(group_id, group_name, group_date, group_parent_id); in fact, each group can have many child groups. I created a group model and I want to begin coding. Is this the right code to handle it?

      protected $_name = 'group';
      protected $_dependentTables = array('Model_group');
      protected $_referenceMap = array(
          'Model_group' => array(
              'columns'       => array('group_parent_id'),
              'refTableClass' => 'Model_group',
              'refColumns'    => array('group_id'),
              'onDelete'      => self::CASCADE,
              'onUpdate'      => self::RESTRICT,
          )
      );

    Read the article

  • Table Partitioning

    - by Ankur Gahlot
    How advantageous is it to use table partitioning compared to the normal approach? Is there a sample case or detailed comparative analysis that could statistically (I know this is too strong a word, but it would really help if it were illustrated by some numbers) demonstrate the utility of the process? Thanks, Ankur

    Read the article

  • Multiple Table Inheritance vs. Single Table Inheritance in Ruby on Rails

    - by Tony
    I have been struggling for the past few hours thinking about which route I should go. I have a Notification model. Up until now I have used a notification_type column to manage the types, but I think it will be better to create separate classes for the types of notifications, as they behave differently. Right now there are 3 ways notifications can get sent out: SMS, Twitter, and Email. Each notification would have: id, subject, message, valediction, sent_people_count, deliver_by, geotarget, event_id, list_id, processed_at, deleted_at, created_at, updated_at. Seems like STI is a good candidate, right? Of course Twitter/SMS won't have a subject, and Twitter won't have a sent_people_count or valediction. I would say in this case they share most of their fields. However, what if I add a "reply_to" field for Twitter and a boolean for DM? My point here is that right now STI makes sense, but is this a case where I may be kicking myself in the future for not just starting with MTI? To further complicate things, I want a Newsletter model which is sort of a notification, except that it won't use event_id or deliver_by. I could see all subclasses of Notification using about 2/3 of the base class fields. Is STI a no-brainer, or should I use MTI? Thanks!
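
    For reference, the STI route is a one-table setup keyed off a type column (a sketch; the class names and the validation are illustrative, not from the post):

      # One `notifications` table holds every row; Rails stores the subclass
      # name in a `type` string column, and unused columns simply stay NULL.
      class Notification < ActiveRecord::Base
      end

      class SmsNotification < Notification
      end

      class TwitterNotification < Notification
        # hypothetical: Twitter-only attributes (reply_to, dm) would sit in the
        # shared table under STI, or in a child table under MTI
      end

      class EmailNotification < Notification
        validates_presence_of :subject   # hypothetical: only email needs a subject
      end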

    Read the article

  • jQuery table cell

    - by Parhs
    Hello. This code produces a mess... What am I doing wrong???

      cell = $("<td>");
      if (normal.exam_type == "Exam_Boolean") {
          var input = cell.append("<input>").last();
          input.attr("type", "hidden");
          input.attr("name", "exam.exam_Normal['" + normal_id_unique + "'].boolean_v");
          input.attr("value", normal.normal_boolean);
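
    A likely explanation (hedged): cell.append("<input>") returns the cell itself, not the new input, so the chained .last() never selects the input and the attributes end up applied to the wrong element. Building the input first and appending it afterwards is the usual pattern:

      cell = $("<td>");
      if (normal.exam_type == "Exam_Boolean") {
          // create the input directly, then append it to the cell
          var input = $("<input>")
              .attr("type", "hidden")
              .attr("name", "exam.exam_Normal['" + normal_id_unique + "'].boolean_v")
              .attr("value", normal.normal_boolean);
          cell.append(input);
      }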

    Read the article

  • Simplest possible table entry editing / addition / deletion web toolkit (for use with php)

    - by Dave
    Hi, I'm building a website in PHP, and I have tables presented that I need to allow the user to:
    1. add a new entry (only one at a time, which should appear as a new modal overlay)
    2. delete multiple selected entries
    3. edit an existing entry (only one at a time, in a view similar to 1.)
    4. re-arrange entries up and down, one at a time (multiple/grouped rearrangements are not needed)

    What jQuery / JS / any toolkit would be the SIMPLEST to work with? (Of course, I should be able to work with it from PHP.) I did try hacking away at http://www.ericmmartin.com/projects/simplemodal/ but had a terrible time trying to get it to work for editing existing data (I had a problem passing data to it).

    Read the article

  • PHP edit unique row in table

    - by Robert
    I currently have a PHP form that uses AJAX to connect to MySQL and display records matching a user's selection (http://stackoverflow.com/questions/2593317/ajax-display-mysql-data-with-value-from-multiple-select-boxes) As well as displaying the data, I also place an 'Edit' button next to each result which displays a form where the data can be edited. My problem is editing unique records since currently I only use the selected values for 'name' and 'age' to find the record. If two (or more) records share the same name and age, I am only able to edit the first result.
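
    A sketch of the usual fix: give the table a surrogate key and carry that key through the Edit button instead of name/age (the table name `people` and its columns are illustrative, not from the question):

      -- one-time schema change
      ALTER TABLE people ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;

      -- the edit form then targets exactly one row
      UPDATE people SET name = ?, age = ? WHERE id = ?;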

    Read the article
