Search Results

Search found 7311 results on 293 pages for 'rows'.


  • jquery: call functions immediately after plugin

    - by Dave
    I'm sure there's an easy answer to this, but I can't find it. I have a table 'myTable' which I stripe using the following:

        $("#myTable tr:even").css({ "background-color": "#FEE996" });
        $("#myTable tr:odd").css({ "background-color": "#FFEFAF" });

    This works fine. I am also using a table filter plugin, as follows:

        $('#myTable').tableFilter();

    This plugin places a blank field at the top of each column into which the filter criteria can be typed. When the table is filtered it removes unmatched rows, which in turn breaks the striping. I would like to be able to re-invoke the striping lines to re-stripe the table. Something like:

        $('#myTable').tableFilter()
            .find("tr:even").css({ "background-color": "#FEE996" })
            .find("tr:odd").css({ "background-color": "#FFEFAF" });

    Is this possible please?
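
    A minimal sketch of one approach: wrap the striping in a named function and call it after the plugin runs, restricting the selectors to visible rows in case the plugin hides rows rather than detaching them. Whether tableFilter() returns the jQuery object (and is therefore chainable) depends on the plugin, so calling the function on its own line is safer:

        function stripe() {
            $("#myTable tr:visible:even").css("background-color", "#FEE996");
            $("#myTable tr:visible:odd").css("background-color", "#FFEFAF");
        }

        $('#myTable').tableFilter();
        stripe(); // re-run this whenever the plugin filters the table

    If the plugin exposes a callback or fires an event when filtering completes, binding stripe to that hook would keep the stripes correct automatically; the name of such a hook, if any, is plugin-specific.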

    Read the article

  • converting a form from text to textarea

    - by David Cook
    I have a form created to pull PHP values into my database. I created the form with all type="text" inputs. What follows is the code that set up the data input; it was confirmed to be functional:

        <label>About Me: <input type="text" name="BIO_info"/></label>

    I converted the input to a textarea and adjusted some parameters for proper display. Unfortunately, this has broken the script. What follows is the code I wrote for the textarea version:

        <label for="BIO_info" style="margin-bottom: 500px; margin-top: 2000px;">About Me:
        <textarea name="BIO_info" rows="20" cols="60" style="resize: none; overflow-y: hidden; vertical-align: middle;"></textarea>
        <p>

    I would appreciate any suggestions.
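
    Two things stand out in the converted markup: the <label> is never closed (a stray <p> appears where </label> should be), and the for attribute must reference an element's id, not its name. Malformed HTML like this can easily break form handling, so it may be the cause. A minimal corrected sketch; the id is an addition here:

        <label for="BIO_info">About Me:</label>
        <textarea id="BIO_info" name="BIO_info" rows="20" cols="60"
                  style="resize: none; overflow-y: hidden; vertical-align: middle;"></textarea>

    Since the name attribute is unchanged, the PHP side should receive the value under the same key as before.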

    Read the article

  • Hibernate deletion issue

    - by muffytyrone
    I'm trying to write a Java app that imports a data file. The process is as follows:

    1. Create transaction.
    2. Delete all rows from the data table.
    3. Load the data file into the data table.
    4. Commit, or roll back if any errors were encountered.

    The data loaded in step 3 is mostly the same as the data deleted in step 2. The deletion is performed using the following:

        DetachedCriteria criteria = DetachedCriteria.forClass(myObject.class);
        List<myObject> myObjects = hibernateTemplate.findByCriteria(criteria);
        hibernateTemplate.deleteAll(myObjects);

    When I then load the data file, I get the following exception:

        nested exception is org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session

    The whole process needs to take place in one transaction, and I don't really want to have to compare the import file against the data table and then perform inserts/updates/deletes to get them in sync. Any help would be appreciated.
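
    A common cause: the deleted instances are still registered in the Hibernate session, so re-loading rows with the same identifiers collides with them. One minimal remedy, assuming Spring's HibernateTemplate, is to flush the pending deletes and clear the session before the load begins; both calls exist on HibernateTemplate:

        hibernateTemplate.deleteAll(myObjects);
        hibernateTemplate.flush();  // push the DELETE statements to the database now
        hibernateTemplate.clear();  // detach the deleted instances from the session

    Everything still happens inside the surrounding transaction, so a failure during the load rolls the deletes back as well. Alternatively, hibernateTemplate.bulkUpdate("delete from MyObject") deletes without ever loading the rows into the session.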

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse (import, staging, audit, dimension and fact tables) together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know: it appears to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for or against it? The platform is Netezza; the size is in the terabytes, with hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what issues should I focus on if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Hibernate / MySQL Bulk insert problem

    - by Marty Pitt
    I'm having trouble getting Hibernate to perform a bulk insert on MySQL. I'm using Hibernate 3.3 and MySQL 5.1. At a high level, this is what's happening:

        @Transactional
        public Set<Long> doUpdate(Project project, IRepository externalSource) {
            List<IEntity> entities = externalSource.loadEntites();
            buildEntities(entities, project);
            persistEntities(project);
        }

        public void persistEntities(Project project) {
            projectDAO.update(project);
        }

    This results in n log entries (one for every row), as follows:

        Hibernate: insert into ProjectEntity (name, parent_id, path, project_id, state, type) values (?, ?, ?, ?, ?, ?)

    I'd like to see this get batched so the update performs better. This routine could generate tens of thousands of rows, and a database round trip per row is a killer. Why isn't this getting batched? (My understanding was that Hibernate batches inserts by default where appropriate.)
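
    JDBC batching is not on by default: it has to be enabled, and Hibernate silently disables it for entities whose identifiers use the IDENTITY generator, because it must read the generated key back after every insert. A sketch of the usual configuration, assuming an identifier strategy other than IDENTITY:

        # hibernate configuration (properties form; the XML equivalent works too)
        hibernate.jdbc.batch_size=50
        hibernate.order_inserts=true

    For MySQL specifically, the Connector/J driver only rewrites batches into multi-row INSERTs when the JDBC URL carries rewriteBatchedStatements=true:

        jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true

    With both in place, flushing a session full of pending inserts should produce batched statements rather than one round trip per row.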

    Read the article

  • How can I generate a random human-readable colour from a seed? C#

    - by SLC
    Got a logfile, and it has all kinds of text in it. Currently it is displayed in a single colour, and each entry says something like:

        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 17: Some text here
        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 1: Some text here
        Log from section 17: Some text here

    The logfile is displayed in real time, and it would be nice to give rows with the same section number the same colour. However, there could be quite a large range of numbers. What I want to do is create a method that takes a number and deterministically generates a unique colour from it. The colour must be readable against a black background, though, so #000000 is no good, nor is #101010 or anything too dark to read. Ideally, two similar numbers will not produce nearly identical colours: in the example above, 1 and 17 might otherwise end up too close, and some numbers might be in the 10,000 range. Any ideas on this?
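
    A minimal sketch of one approach: derive only the hue from the section number and pin saturation and lightness to values that stay readable on black. Multiplying by the golden-ratio conjugate scatters consecutive numbers around the hue wheel, so nearby section numbers get clearly different hues:

        using System;
        using System.Drawing;

        static class SectionColors
        {
            // Deterministic: the same section number always yields the same colour.
            public static Color FromSection(int section)
            {
                double hue = (section * 0.61803398875) % 1.0; // golden-ratio conjugate spreads neighbours apart
                return FromHsl(hue * 360.0, 0.85, 0.65);      // fixed S/L keeps every colour bright on black
            }

            // Standard HSL -> RGB conversion (h in degrees, s and l in [0,1]).
            private static Color FromHsl(double h, double s, double l)
            {
                double c = (1 - Math.Abs(2 * l - 1)) * s;
                double x = c * (1 - Math.Abs(h / 60 % 2 - 1));
                double m = l - c / 2;
                double r = 0, g = 0, b = 0;
                if (h < 60)       { r = c; g = x; }
                else if (h < 120) { r = x; g = c; }
                else if (h < 180) { g = c; b = x; }
                else if (h < 240) { g = x; b = c; }
                else if (h < 300) { r = x; b = c; }
                else              { r = c; b = x; }
                return Color.FromArgb((int)((r + m) * 255),
                                      (int)((g + m) * 255),
                                      (int)((b + m) * 255));
            }
        }

    With lightness fixed at 0.65, no output can approach #000000, and sections 1 and 17 land about 40 degrees apart on the hue wheel, roughly the difference between blue and cyan.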

    Read the article

  • Find host from ItemContainerGenerator.ItemsChanged event

    - by Mohanavel
    I'm working in C# 4.0 / WPF. I have three ListViews, and all three share the same ItemContainerGenerator_ItemsChanged event handler. My problem is that whenever the event fires, I have to find the host:

        lst1.ItemContainerGenerator.ItemsChanged += new System.Windows.Controls.Primitives.ItemsChangedEventHandler(ItemContainerGenerator_ItemsChanged);
        lst2.ItemContainerGenerator.ItemsChanged += new System.Windows.Controls.Primitives.ItemsChangedEventHandler(ItemContainerGenerator_ItemsChanged);
        lst3.ItemContainerGenerator.ItemsChanged += new System.Windows.Controls.Primitives.ItemsChangedEventHandler(ItemContainerGenerator_ItemsChanged);

        void ItemContainerGenerator_ItemsChanged(object sender, System.Windows.Controls.Primitives.ItemsChangedEventArgs e)
        {
            // TODO: Find host and proceed. ** REAL problem **
            // Each ListViewItem's Visible property is set based on the delete button click,
            // so at this point I need the count of visible rows in the relevant list
            // to drive the enable/disable state of the related buttons.
        }
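
    The sender here is the ItemContainerGenerator itself, not the ListView, so one minimal approach is to capture the host in a closure when wiring the handlers. A sketch; HandleItemsChanged is a hypothetical method taking the host explicitly:

        foreach (var lv in new[] { lst1, lst2, lst3 })
        {
            var host = lv; // capture a fresh variable per iteration
            host.ItemContainerGenerator.ItemsChanged +=
                (s, e) => HandleItemsChanged(host, e);
        }

        void HandleItemsChanged(System.Windows.Controls.ListView host,
                                System.Windows.Controls.Primitives.ItemsChangedEventArgs e)
        {
            // host is now known; count its visible items and toggle the buttons here
        }

    Alternatively, keep the single shared handler and compare (ItemContainerGenerator)sender against lst1.ItemContainerGenerator and so on, since each ItemsControl exposes its own generator instance.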

    Read the article

  • pluralize and singularize for the Spanish language

    - by el_quick
    Hello, sorry for my English. I have a Rails application developed for Spain, so all content is in Spanish. I have a search box that queries a MySQL database in which all rows are in Spanish, and I'd like to improve my search so users can enter keywords in either singular or plural form. For example:

        keyword: patatas      found: patata
        keyword: veces        found: vez
        keyword: vez          found: veces
        keyword: actividades  found: actividad

    In English this would be relatively easy with the help of the singularize and pluralize methods:

        ... where `searching_field` like '%singularized_keyword%'
            or `searching_field` like '%pluralized_keyword%'

    But for Spanish? Any help? Thanks!
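
    ActiveSupport's inflector is rule-based, so Spanish rules can be registered alongside the English ones. Rails 4 and later accept a locale argument to inflections and to pluralize/singularize; on older versions the rules would have to replace the global English set. A minimal sketch covering only the patterns in the examples above; a real rule set needs many more rules and exceptions. Rules are tried newest-first, which is why the more specific z/ces rules come last:

        # config/initializers/inflections.rb
        ActiveSupport::Inflector.inflections(:es) do |inflect|
          inflect.plural(/([aeiou])$/i, '\1s')      # patata    -> patatas
          inflect.plural(/([^aeiou])$/i, '\1es')    # actividad -> actividades
          inflect.plural(/z$/i, 'ces')              # vez       -> veces
          inflect.singular(/s$/i, '')               # patatas   -> patata
          inflect.singular(/es$/i, '')              # actividades -> actividad
          inflect.singular(/ces$/i, 'z')            # veces     -> vez
        end

        'vez'.pluralize(:es)            # => "veces"
        'actividades'.singularize(:es)  # => "actividad"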

    Read the article

  • INSERT SELECT Statement and Rollback SQL

    - by Juan Perez
    I'm working on a query that uses an INSERT ... SELECT statement on MS SQL Server 2008:

        INSERT INTO TABLE1 (col1, col2)
        SELECT col1, col2 FROM TABLE2

    Right now the execution of this query happens inside a transaction. Pseudocode:

        try {
            begin transaction;
            query;
            commit;
        } catch {
            rollback;
        }

    If TABLE2 has around 40 million rows and there is an error in the middle of the INSERT into TABLE1, will the INSERT ... SELECT statement roll itself back, or do I need to use a transaction to preserve data integrity? Is the transaction necessary, or does SQL Server itself use a transaction for this type of statement?
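
    A single SQL statement is atomic: SQL Server wraps each standalone statement in an implicit transaction, so a failure partway through the INSERT ... SELECT undoes every row it had written. An explicit transaction only becomes necessary when the insert must commit or roll back together with other statements. A sketch in T-SQL, using RAISERROR rather than THROW since THROW only arrived in SQL Server 2012:

        BEGIN TRY
            BEGIN TRANSACTION;
            INSERT INTO TABLE1 (col1, col2)
            SELECT col1, col2 FROM TABLE2;
            COMMIT;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK;
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1); -- re-raise for the caller
        END CATCH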

    Read the article

  • MS SQL Cursor data as result of stored procedure

    - by Dmitry Borovsky
    Hello, I have a stored procedure along these lines:

        DECLARE cursor FOR SELECT [FooData] FROM [FooTable];
        OPEN cursor;
        FETCH NEXT FROM cursor INTO @CurrFooData;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SELECT @CurrFooData AS FooData;
            INSERT INTO Bar (BarData) VALUES (@CurrFooData);
            FETCH NEXT FROM cursor INTO @CurrFooData;
        END;
        CLOSE cursor;
        DEALLOCATE cursor;

    But the result is a lot of one-row result sets, not one. How can I return one table with a 'FooData' column and all the '@CurrFooData' rows?
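
    The per-iteration SELECT inside the loop is what produces a new result set on every fetch. If the procedure's real job is just "insert the rows, then return them", the cursor can go away entirely; a sketch using the OUTPUT clause (SQL Server 2005 and later):

        INSERT INTO Bar (BarData)
        OUTPUT inserted.BarData AS FooData  -- returns all inserted rows as one result set
        SELECT [FooData] FROM [FooTable];

    If the cursor must stay (say, for per-row logic not shown here), accumulate into a table variable inside the loop and select from it once after the loop; the column type below is an assumption:

        DECLARE @results TABLE (FooData nvarchar(max));
        -- inside the loop, replace the bare SELECT with:
        --     INSERT INTO @results (FooData) VALUES (@CurrFooData);
        -- and after CLOSE/DEALLOCATE:
        SELECT FooData FROM @results;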

    Read the article

  • How to select the range for pasting using VBA

    - by user1616384
    I wrote some code that finds a particular row and pastes it column-wise using the PasteSpecial property. It is working correctly. My code is:

        lngRow = Me.TextBox4.Value
        strCol = Me.TextBox5.Value
        Set rng = Range("A:A").Find(What:=lngRow, LookIn:=xlValues, LookAt:=xlWhole)
        If rng Is Nothing Then
            MsgBox "Value not found in row 1", vbExclamation
        Else
            Range(rng, rng.End(xlToRight)).Copy
            Range("A1:E3").Columns(strCol).Offset(, 1).PasteSpecial Transpose:=True
            Range("A1:E3").Rows(1).Copy
            Range("A1:E3").Columns(strCol).PasteSpecial Transpose:=True
        End If

    The problem is that I copy with Range(rng, rng.End(xlToRight)).Copy but paste with Range("A1:E3").Columns(strCol).Offset(, 1).PasteSpecial Transpose:=True. How can I paste all the values that were copied? If the values extend into column F, this macro will not paste them.
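
    PasteSpecial only needs the top-left cell of the destination: Excel sizes the paste to whatever was copied, so the hard-coded "A1:E3" block is what clips the data. A sketch; the destination row and the "one column to the right" offset are assumptions carried over from the original:

        Dim src As Range
        Set src = Range(rng, rng.End(xlToRight))
        src.Copy
        ' Paste at a single anchor cell; the transposed block grows to fit src
        Cells(1, Columns(strCol).Column + 1).PasteSpecial Transpose:=True

    Note that rng.End(xlToRight) stops at the first empty cell; if the source row can contain gaps, Cells(rng.Row, Columns.Count).End(xlToLeft) finds the true last used cell instead.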

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten; all non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select * from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns? Ten indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single 10-column index would only work effectively if all 10 columns are in the condition. Open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
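
    A wrinkle worth noting before picking a column combination: a b-tree index (single- or multi-column) cannot serve a LIKE pattern with a leading wildcard such as '%foo%', so every option listed would still end up scanning. What does serve this pattern is a trigram index from the pg_trgm contrib module, one per searchable column; on PostgreSQL 9.1 or later, a sketch:

        CREATE EXTENSION pg_trgm;

        CREATE INDEX the_table_column1_trgm ON the_table USING gin (column1 gin_trgm_ops);
        CREATE INDEX the_table_column4_trgm ON the_table USING gin (column4 gin_trgm_ops);
        -- ...and so on for each searchable column

    The planner combines the per-column indexes with a bitmap AND when several criteria are present. (Before 9.1, pg_trgm indexes only accelerate the similarity operators, not LIKE.) At 20-30K rows, though, a sequential scan may already be fast enough that none of this is needed.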

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user
            id
            name

        profile_stat
            id
            name

        profile_stat_value
            id
            name

        user_profile
            user_id
            profile_stat_id
            profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users matching a given (profile_stat_id, profile_stat_value_id) pair for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
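
    This is the classic relational-division shape, and the count approach can be written without any self joins: filter to the wanted pairs, group by user, and demand that every pair matched. A sketch with hypothetical stat/value ids:

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id) IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(*) = 3;  -- one row per wanted pair: the user matched all three

    With a composite index on (profile_stat_id, profile_stat_value_id, user_id), both the filter and the grouping can be served from the index, which is usually what brings a 10M-row table down to seconds. If a user can hold the same pair more than once, a COUNT(DISTINCT ...) over the pair is needed instead of COUNT(*).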

    Read the article

  • How to paste text and variables into a logical expression in R?

    - by Jasper
    I want to paste variable column names into the logical expression that I am using to subset data, but the subset function does not see the pasted strings as column names (either with or without quotes). I have a data frame with columns named col1, col2, etc., and I want to subset the rows in which colx < 0.05.

    This DOES work:

        subsetdata <- subset(dataframe, col1 < 0.05)
        subsetdata <- subset(dataframe, col2 < 0.05)

    This does NOT work:

        for (k in 1:2) {
            subsetdata <- subset(dataframe, paste("col", k, sep = "") < 0.05)
        }

        for (k in 1:2) {
            subsetdata <- subset(dataframe, noquote(paste("col", k, sep = "")) < 0.05)
        }

    I can't find the answer; any suggestions?
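
    subset() treats its second argument as an unevaluated expression, so a pasted string stays a string: comparing the character value "col1" to 0.05 never becomes a column test. The plain-indexing form avoids non-standard evaluation entirely; a sketch:

        for (k in 1:2) {
            colname <- paste("col", k, sep = "")
            subsetdata <- dataframe[dataframe[[colname]] < 0.05, ]
        }

    dataframe[[colname]] looks the column up by name at run time, which is exactly what paste() can feed. As written, each iteration overwrites subsetdata; collecting the results in a list is one option if all the subsets are needed.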

    Read the article

  • Recommendation for a jQuery table pager plugin?

    - by chobo2
    Hi, I was trying to use the pager plugin that comes with the tablesorter plugin, but I can't get it to work, as you can see from my previous post: http://stackoverflow.com/questions/2836680/need-help-with-jquery-tablesorter-pager-plugin. I've given up on that plugin, as no one seems to be able to come up with a solution for making it work, and I need to get this in place soon. So now I am looking for a new one, but it must have the following features:

    1. Works on tables.
    2. Works on tables that have the tablesorter 2.0 plugin applied (I don't want a pager that comes with its own table sorter, since I don't want to change the sorting; it should be a standalone pager plugin).
    3. Allows rows to be added to the table dynamically and somehow updates the pager so the new row becomes part of the paging.

    Thanks

    Read the article

  • CodeIgniter Active Record Queries W/ Sub Queries

    - by Mike
    Question: I am really trying to stick to Active Record and avoid writing straight SQL. Can someone help me convert this query to Active Record? I'm trying to get the email address and contact name from another table. The map_userfields table is one-to-many: multiple rows per p.id, one row per p.id per uf.fieldid. Current non-Active-Record query:

        SELECT p.id,
               (SELECT uf.fieldvalue FROM map_userfields uf
                WHERE uf.pointid = p.id AND uf.fieldid = 20) AS ContactName,
               (SELECT uf.fieldvalue FROM map_userfields uf
                WHERE uf.pointid = p.id AND uf.fieldid = 31) AS ContactEmail
        FROM map_points p
        WHERE /* $pointCategory is an array of categories to look for */
              p.type IN ($pointCategory)

    Note: I am using CodeIgniter 2.1.x, MySQL 5.x, PHP 5.3.
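
    CodeIgniter's Active Record has no subquery builder, but select() accepts an arbitrary expression when FALSE is passed as its second argument to suppress identifier escaping, which keeps the rest of the query in Active Record style. A sketch:

        $this->db->select('p.id');
        $this->db->select("(SELECT uf.fieldvalue FROM map_userfields uf
                            WHERE uf.pointid = p.id AND uf.fieldid = 20) AS ContactName", FALSE);
        $this->db->select("(SELECT uf.fieldvalue FROM map_userfields uf
                            WHERE uf.pointid = p.id AND uf.fieldid = 31) AS ContactEmail", FALSE);
        $this->db->from('map_points p');
        $this->db->where_in('p.type', $pointCategory); // expects a PHP array
        $query = $this->db->get();

    where_in() takes the array directly, so the hand-built IN(...) string goes away as a bonus.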

    Read the article

  • Dirty Reads in Postgres

    - by User1
    I have a long-running function that should be inserting new rows. How do I check the progress of this function? I was thinking dirty reads would work, so I read http://www.postgresql.org/docs/8.4/interactive/sql-set-transaction.html and came up with the following code, which I ran in a new session:

        SET SESSION CHARACTERISTICS AS SERIALIZABLE;
        SELECT * FROM MyTable;

    Postgres gives me a syntax error. What am I doing wrong? If I do it right, will I see the inserted records while that long function is still running? Thanks
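
    Two separate issues here. The syntax error is a missing clause; the documented form is:

        SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    But fixing it will not help: PostgreSQL has no dirty reads. READ UNCOMMITTED is accepted syntactically but behaves as READ COMMITTED, so another session can never see rows from an uncommitted transaction. For a rough progress estimate, one common workaround is to watch the table's id sequence, since sequence increments are neither rolled back nor hidden by transactions (the sequence name here is an assumption):

        SELECT last_value FROM mytable_id_seq;

    Comparing last_value before and during the run gives an approximate count of rows inserted so far.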

    Read the article

  • do.call(rbind, list) for an uneven number of columns

    - by h.l.m
    I have a list in which each element is a character vector, with the vectors differing in length. I would like to bind the data as rows so that the column names line up: if an element brings extra data, a new column is created, and missing entries become NA. Below is a mock example of the data I am working with:

        x <- list()
        x[[1]] <- letters[seq(2, 20, by = 2)]
        names(x[[1]]) <- LETTERS[c(1:length(x[[1]]))]
        x[[2]] <- letters[seq(3, 20, by = 3)]
        names(x[[2]]) <- LETTERS[seq(3, 20, by = 3)]
        x[[3]] <- letters[seq(4, 20, by = 4)]
        names(x[[3]]) <- LETTERS[seq(4, 20, by = 4)]

    The line below is what I would normally use if every element had the same format:

        do.call(rbind, x)

    I was hoping someone had a nice little solution that matches up the column names, fills in the blanks with NAs, and adds new columns whenever the binding process encounters one.
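
    A base-R sketch: collect the union of all the names, pad each vector out to that set, then rbind the now-identical layouts:

        all_names <- unique(unlist(lapply(x, names)))

        aligned <- lapply(x, function(v) {
            v[setdiff(all_names, names(v))] <- NA  # add the missing columns as NA
            v[all_names]                           # put every vector in the same order
        })

        result <- do.call(rbind, aligned)  # a character matrix with colnames all_names

    The same effect comes packaged in plyr::rbind.fill or data.table::rbindlist(fill = TRUE), but both want the elements converted to one-row data frames first.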

    Read the article

  • Using LINQ on a DataSet

    - by JasonMHirst
    Can someone enlighten me with regard to LINQ, please? I have a DataSet that is populated via a SQL stored procedure, in the format below:

        Country | Brand | Variant | 2004 | 2005 | 2006 | 2007 | 2008

    The number of rows varies between 50 and several thousand. What I'm trying to do is use LINQ to interrogate the DataSet (there will be several LINQ queries based on user options); a simple example would be to SUM the year columns grouped by Brand. I have the following, which I believe creates a template for me to work with, but from here on I'm absolutely stuck:

        sqlDA.Fill(ds, "Profiler")

        Dim brandsQuery = From cust In ds.Tables(0).AsEnumerable()
                          Select _BrandName = cust.Item("BrandName"),
                                 _y0 = cust.Item("1999"),
                                 _y1 = cust.Item("2004"),
                                 _y2 = cust.Item("2005"),
                                 _y3 = cust.Item("2006"),
                                 _y4 = cust.Item("2007"),
                                 _y5 = cust.Item("2008")

    I've tried to look at examples but can't see any that are VB.NET-based and/or show me how to Sum/Group. Can someone please provide an example so I can perhaps learn from it? Thanks.
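
    VB.NET's query syntax supports grouped aggregates directly via Group By ... Into. A sketch that sums each year column per brand; the column types are assumptions, so Field(Of Double) should be adjusted to whatever the stored procedure actually returns:

        Dim brandTotals = From cust In ds.Tables(0).AsEnumerable()
                          Group By Brand = cust.Field(Of String)("BrandName")
                          Into Y2004 = Sum(cust.Field(Of Double)("2004")),
                               Y2005 = Sum(cust.Field(Of Double)("2005")),
                               Y2006 = Sum(cust.Field(Of Double)("2006")),
                               Y2007 = Sum(cust.Field(Of Double)("2007")),
                               Y2008 = Sum(cust.Field(Of Double)("2008"))

        For Each row In brandTotals
            Console.WriteLine("{0}: {1}", row.Brand, row.Y2004)
        Next

    AsEnumerable() and Field(Of T) live in System.Data.DataSetExtensions, which needs to be referenced by the project.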

    Read the article

  • jQuery add/remove row

    - by bocca
    I am trying to set up jQuery table rows with add/remove row functionality. I got started with an online tutorial, http://jsbin.com/aciba, and that works fine, but it only has input fields; we need selects to choose from. So I made some changes to accept selects as well as inputs: http://jsbin.com/emata. This does not work: try selecting option 3 in cell 1 and pressing "Add". The added row's cell 1 select falls back to the default option 1. What could be the reason?
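
    A likely culprit, assuming the add button clones an existing row: jQuery's clone() copies DOM attributes, but a <select>'s current selection is a property, not an attribute, so the clone falls back to whichever <option> is marked selected in the markup. One sketch of a workaround is to copy the live values across by hand after cloning; the selectors here are hypothetical:

        var $row = $('#mytable tbody tr:first');
        var $clone = $row.clone(true);

        // carry the live <select> values over to the clone
        $clone.find('select').each(function (i) {
            $(this).val($row.find('select').eq(i).val());
        });

        $clone.appendTo('#mytable tbody');

    If the new row is instead meant to start blank, the same loop can set an explicit default rather than trusting the clone.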

    Read the article

  • Need help setting up a JSON array

    - by torr
    A database query returns several rows, which I loop through as follows:

        foreach ($query->result() as $row) {
            $data[$row->post_id]['post_id']   = $row->post_id;
            $data[$row->post_id]['post_type'] = $row->post_type;
            $data[$row->post_id]['post_text'] = $row->post_text;
        }

    If I json_encode the resulting array ($a['stream']), I get:

        {
            "stream": {
                "1029": { "post_id": "1029", "post_type": "1", "post_text": "bla1" },
                "1030": { "post_id": "1030", "post_type": "3", "post_text": "bla2" },
                "1031": { "post_id": "1031", "post_type": "2", "post_text": "bla3" }
            }
        }

    But the JSON should actually look like this:

        {
            "stream": {
                "posts": [
                    { "post_id": "1029", "post_type": "1", "post_text": "bla1" },
                    { "post_id": "1030", "post_type": "3", "post_text": "bla2" },
                    { "post_id": "1031", "post_type": "2", "post_text": "bla3" }
                ]
            }
        }

    How should I build my array to get this JSON right?
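
    json_encode() emits a JSON object whenever the PHP array's keys are not the exact sequence 0, 1, 2, ...; keying by post_id is what forces the object form. Appending with [] produces sequential keys and hence a JSON array; a sketch:

        $posts = array();
        foreach ($query->result() as $row) {
            $posts[] = array(
                'post_id'   => $row->post_id,
                'post_type' => $row->post_type,
                'post_text' => $row->post_text,
            );
        }

        echo json_encode(array('stream' => array('posts' => $posts)));

    If lookup by id is still needed elsewhere, keep the keyed array too and run it through array_values() just before encoding.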

    Read the article

  • Python CSV file processing

    - by kingwarchief
    I just got introduced to Python, the first language I've learned, and I have this question: I have an Excel-based CSV file with two columns that I am working on. What I need to do is perform some operations to compare the two data entries in each row. To be more precise, one column holds a constant number all the way down, whereas the other column varies. I need to count the number of times the varying column's value crosses the constant value in the other column. For example:

        Varying Column | Constant Column
        24             | 25
        26             | 25    <- crosses
        27             | 25
        26             | 25
        25.5           | 25
        23             | 25    <- crosses
        26             | 25    <- crosses

    So in this case the number of crossings is 3.
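
    A minimal sketch using the csv module: track the sign of (varying - constant) and count sign flips. The filename and the "two columns, no header row" layout are assumptions:

        import csv

        crossings = 0
        prev_sign = 0  # sign of the previous nonzero difference

        with open('data.csv', newline='') as f:
            for varying, constant in csv.reader(f):
                diff = float(varying) - float(constant)
                if diff != 0:
                    sign = 1 if diff > 0 else -1
                    if prev_sign and sign != prev_sign:
                        crossings += 1  # the varying value moved to the other side
                    prev_sign = sign

        print(crossings)  # 3 for the sample data above

    Rows where the two values are exactly equal are skipped, so merely touching the constant line does not count as a crossing.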

    Read the article

  • Manipulate Excel workbooks programmatically

    - by Tom
    I have an Excel workbook that I want to use as a template. It has several worksheets set up, one of which produces the pretty graphs and summarizes the numbers. Sheet1 needs to be populated with data that is generated by another program, and that data comes in a tab-delimited file. Currently the user imports the tab-delimited file into a new workbook, selects all, copies, then goes to the template and pastes the data into Sheet1. This is a large amount of data, 269 columns and over 135,000 rows, so it's a cumbersome process, and the users are not experienced Excel users; all they really want is the pretty graphs. I would like to add a step after the program that generates the data to programmatically automate what the user currently does manually. Can anyone suggest the best method/programming language to accomplish this?
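
    One option is a small C# console step using the Excel COM interop, which requires Excel on the machine but leaves the template untouched otherwise. The key to making 135,000 x 269 cells tolerable is one bulk assignment of a 2-D array to a range rather than per-cell writes. All paths, the sheet name, and the absence of a header row are assumptions:

        using System;
        using System.IO;
        using System.Linq;
        using Excel = Microsoft.Office.Interop.Excel;

        class TemplateFiller
        {
            static void Main()
            {
                // Read the tab-delimited export into a jagged array of fields
                string[][] rows = File.ReadLines(@"C:\data\export.txt")
                                      .Select(line => line.Split('\t'))
                                      .ToArray();

                var data = new object[rows.Length, rows[0].Length];
                for (int r = 0; r < rows.Length; r++)
                    for (int c = 0; c < rows[0].Length; c++)
                        data[r, c] = rows[r][c];

                var app = new Excel.Application();
                var wb = app.Workbooks.Open(@"C:\templates\report.xlsx");
                var sheet = (Excel.Worksheet)wb.Worksheets["Sheet1"];

                // One bulk write instead of ~36 million individual cell assignments
                sheet.Range["A1"].Resize[rows.Length, rows[0].Length].Value2 = data;

                wb.SaveAs(@"C:\reports\report-filled.xlsx");
                wb.Close();
                app.Quit();
            }
        }

    If memory is tight, the array can be written in chunks of a few thousand rows. And if installing Excel on the machine that runs this step is a problem, a library that writes .xlsx directly (EPPlus, for instance) avoids COM entirely, at the cost of verifying that the template's charts refresh on open.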

    Read the article

  • tsql sum data and include default values for missing data

    - by markpirvine
    Hi, I would like a query that shows a sum of values per type, with a default value for missing data. For example, assume I have a lookup table as follows:

        type_lookup:
        id  name
        1   self
        2   manager
        3   peer

    And a data table as follows:

        data:
        id  type_lookup_id  value
        1   1               1
        2   1               4
        3   2               9
        4   2               1
        5   2               9
        6   1               5
        7   2               6
        8   1               2
        9   1               1

    After running the query I would like a result set as follows:

        type_lookup_id  value
        1               13
        2               25
        3               0

    I would like all rows in the type_lookup table to be included in the result set, even if they don't appear in the data table. Any help would be greatly appreciated. Mark
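
    A LEFT JOIN from the lookup table keeps every type, and COALESCE turns the SUM over zero matching rows (which is NULL) into the 0 default. A sketch:

        SELECT tl.id AS type_lookup_id,
               COALESCE(SUM(d.value), 0) AS value
        FROM type_lookup tl
        LEFT JOIN data d
               ON d.type_lookup_id = tl.id
        GROUP BY tl.id
        ORDER BY tl.id;

    Against the sample data this yields 13, 25 and 0 for types 1, 2 and 3 respectively.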

    Read the article

  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this:

        `MyDB`.`Details` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        );

    I have the following SELECT statement in a stored procedure (runId is an input parameter to the stored procedure):

        SELECT RULE,
               TITLE,
               SUM(IF(t.PASSED = 'Y', 1, 0)) AS PASS,
               SUM(IF(t.PASSED = 'N', 1, 0)) AS FAIL
        FROM (
            SELECT a.line_order,
                   MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED,
                   MAX(CASE WHEN a.element_name = 'RULE'   THEN a.`value` END) AS RULE,
                   MAX(CASE WHEN a.element_name = 'TITLE'  THEN a.`value` END) AS TITLE
            FROM Details a
            WHERE run_id = runId
            GROUP BY line_order
        ) t
        GROUP BY RULE, TITLE;

    This query takes about 14 seconds to run. The table has 214,856 rows, and the particular run_id I am filtering on has 162,204 records. It's not a super high-powered machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by RULE and TITLE and show PASS and FAIL count columns.
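
    With no index shown on the table, the inner query has to scan everything and then sort 162K rows for the GROUP BY. For this entity-attribute-value pivot, a composite index lets MySQL filter on run_id and group by line_order straight off the index; a sketch, with an arbitrary index name:

        ALTER TABLE `MyDB`.`Details`
            ADD INDEX `idx_run_line_elem` (`run_id`, `line_order`, `element_name`);

    The `value` column is TEXT, so it can't be part of the index and row lookups remain, but the filter and the grouping order come free. Restricting the inner query to the three element names it actually pivots should also trim the rows read:

        WHERE run_id = runId
          AND element_name IN ('PASSED', 'RULE', 'TITLE')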

    Read the article
