Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 207 of 293

  • Arranging image thumbnails

    - by Adi Mathur
    I am using jQuery hoverZoom (http://jmar.github.com/jquery-hoverZoom/) and it's working fine. The thumbnails I have are of different sizes, so I started using the Masonry plugin to arrange them. Both work fine in isolation, but together the Masonry plugin doesn't work as intended: pictures start to overlap each other. My feeling is that Masonry and hoverZoom both interact with the same div element and both add their own attributes to it, which causes the problem. How can I fix this? Is there some way I can arrange the rows without Masonry, so that the conflict won't occur?

    Read the article

  • MEMORY (HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really talking about 10,000 updates or selects per second. These statements will be executed on only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit)? My main concern is the locking behavior: InnoDB row locks vs. the MEMORY engine's table lock. Will this be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
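
    One way to settle this empirically is a small throughput benchmark. A rough sketch in Python, assuming the mysql-connector-python package and two hypothetical copies of the table (profile_innodb and profile_memory) with an indexed id column and an integer counter column; it measures single-connection UPDATE throughput per engine:

        import time
        import mysql.connector

        # Hypothetical connection details and table names; adjust to the real schema.
        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="test")
        cur = conn.cursor()

        def updates_per_second(table, seconds=5):
            # Hammer the table with single-row updates for a fixed wall-clock window.
            done = 0
            deadline = time.time() + seconds
            while time.time() < deadline:
                cur.execute("UPDATE " + table + " SET counter = counter + 1 WHERE id = %s",
                            (done % 10000 + 1,))
                conn.commit()
                done += 1
            return done / seconds

        print("InnoDB :", updates_per_second("profile_innodb"))
        print("MEMORY :", updates_per_second("profile_memory"))

    Running the same loop from several processes at once would also expose the row-lock vs. table-lock difference the question is about.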

    Read the article

  • ExtJS Grid slow with 3000+ records

    - by Oliver Watkins
    I am using the ExtJS grid and it's getting pretty slow with 3000+ records: sorting takes about 4 seconds, which is slow compared to other JavaScript tables. I am thinking maybe to use pagination in my table. However, after reading the documentation I am still a bit unsure about how pagination works in ExtJS. Does it pull data from the server each time you turn a page? I would prefer that weren't the case; I would prefer the 3000 records stay in the browser and only a portion of those rows is rendered. Also, I am using ExtJS version 4.2.1. If I upgrade to version 5, will I get some performance improvements?

    Read the article

  • Using JavaScript replace() to match the last occurrence of a string

    - by Dave
    I'm building an 'add new row' function for product variations, and I'm struggling with the regex required to match the form attribute keys. I'm basically cloning rows, then incrementing the keys, like this (CoffeeScript):

        newrow = oldrow.find('select, input, textarea').each ->
          this.name = this.name.replace(/\[(\d+)\]/, (str, p1) ->
            "[" + (parseInt(p1, 10) + 1) + "]"
          )
          this.id = this.id.replace(/\_(\d+)\_/, (str, p1) ->
            "_" + (parseInt(p1, 10) + 1) + "_"
          )
        .end()

    This correctly increments a field with a name of product[variations][1][name], turning it into product[variations][2][name]. BUT each variation can have multiple options (e.g. color can be red, blue, green), so I need to be able to turn product[variations][1][options][2][name] into product[variations][1][options][3][name], leaving the variation key alone. What regex do I need to match only the last occurrence of a key (the options key)?
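
    The usual trick for "last occurrence" is to let a greedy prefix consume everything up to the final match. A minimal sketch in Python of the idea, assuming the last bracketed number in the name is always the one to bump:

        import re

        name = "product[variations][1][options][2][name]"

        def bump_last_index(s):
            # The greedy ".*" in the first group swallows as much as possible,
            # so the captured digits are the last [number] in the string.
            return re.sub(r'^(.*\[)(\d+)(\].*)$',
                          lambda m: m.group(1) + str(int(m.group(2)) + 1) + m.group(3),
                          s)

        print(bump_last_index(name))  # product[variations][1][options][3][name]

    The same pattern works with JavaScript's String.replace, since its regex engine is also greedy by default: capture (.*\[), the digits, and the rest, and rebuild the string in the replacement callback.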

    Read the article

  • ASP.NET: How to assign ID to a field in DetailsView?

    - by jawonlee
    I have a master-detail page in which I use a GridView to display multiple rows of data, and a DetailsView + jQuery dialog to display the details of a single record. Only one DetailsView is open at a time. I need to be able to pull out a single field of the open DetailsView for manipulation using JavaScript. Is there a way to give a unique ID to a given field in the DetailsView, so I can use getElementById? Or is there another way to accomplish what I'm trying to do? Thank you in advance.

    Read the article

  • MySQL Query WHERE Including CASE or IF?

    - by handfix
    Strange problem. My query looks like:

        SELECT DISTINCT ID, `etcetc`, `if/elses over multiple joined tables`
        FROM table1 AS `t1`
        # some joins, not relevant in this context
        WHERE
          # some standard where statements, they work
          CASE
            WHEN `t1`.`field` = "foo" THEN (`t1`.`anOtherField` != 123
                                            AND `t1`.`anOtherField` != 456
                                            AND `t1`.`anOtherOtherField` != "some String")
            WHEN `t1`.`field` = "bar" THEN `t1`.`aSecondOtherField` != 12345
          END
        # ORDER BY CASE etc. Standard stuff

    Apparently MySQL returns a wrong row count, and I think my problem is in the logic of the WHERE ... CASE statement. Maybe with the brackets? Maybe I should go for the OR operator and not AND? Should the second WHEN also include brackets, even when I only compare one field? Should I use IF and not CASE? Basically I want to exclude some rows with specific values if there's a specific value in field foo or bar. I would try it all out, but it takes a huge amount of time to complete that query... :(

    Read the article

  • How do I generate a random time interval and add it to a mysql datetime using php?

    - by KeenLearner
    I have many rows in a MySQL table with datetimes in the format 2008-12-08 04:16:51. I'd like to generate a random time interval of anywhere between 30 seconds and 3 days and add it to the time above. a) How do I generate a random interval between 30 seconds and 3 days? b) How do I add this interval to the datetime format above? I imagine I need to loop to pull out all the info, do the math in PHP, and then update the row... Any ideas?
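
    The arithmetic itself is just "pick a random number of seconds between 30 and 259200 (3 days) and add it". A minimal sketch in Python of the per-row math (the same idea translates to PHP, or can be pushed entirely into SQL with DATE_ADD and RAND()):

        import random
        from datetime import datetime, timedelta

        stored = "2008-12-08 04:16:51"                        # value read from the row
        dt = datetime.strptime(stored, "%Y-%m-%d %H:%M:%S")

        # 3 days = 3 * 24 * 60 * 60 = 259200 seconds
        offset = timedelta(seconds=random.randint(30, 259200))
        shifted = dt + offset

        print(shifted.strftime("%Y-%m-%d %H:%M:%S"))          # format back for MySQL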

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select * from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns? 10 indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single index on 10 columns would only work effectively if all 10 columns are in the condition. Open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!

    Read the article

  • Placing error message for a checkbox array

    - by eddy
    Hello all. I am using the Validation plugin for jQuery and it works wonders, except when I have a group of checkboxes: the error message will display right after the first checkbox, like so:

        <tbody>
          <c:forEach items="${list}" var="item">
            <tr>
              <td align="center">
                <input type="checkbox" name="selectItems"
                       value="<c:out value="${item.numberPlate}"/>" />
              </td>
              <!-- some other columns -->
            </tr>
          </c:forEach>
        </tbody>

    I found that I can use a wrapper for these checkboxes and then place the error message there, but I have no idea how to do it since I'm creating the rows dynamically. Hope you can help me out.

    Read the article

  • Getting DateTimeOffset value from SQL 2008 to C#

    - by Darvis Lombardo
    I have a SQL 2008 table with a field called RecDate of type DateTimeOffset. For a given record the value is '2010-04-01 17:19:23.62 -05:00'. In C# I create a DataTable and fill it with the results of "SELECT RecDate FROM MyTable". I need to get the milliseconds, but if I do the following the milliseconds are always 0:

        DateTimeOffset dto = DateTimeOffset.Parse(dt.Rows[0][0].ToString());

    What is the proper way to get the value in the RecDate column into the dto variable? Thanks! Darvis

    Read the article

  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel from which I need to get certain percentile information. The problem is that instead of having each individual value, I only have counts per value ("bucket" data). For example, imagine that my actual data set looks like this:

        1, 1, 2, 2, 2, 2, 3, 3, 4, 4, 4

    The data set that I have is this:

        Value   No. of occurrences
        1       2
        2       4
        3       2
        4       3

    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know I could just use the Percentile(A1:A5, p) function.) This is important because my data set is very large: if I exploded the data out, I would have hundreds of thousands of rows, and I would have to do it for a couple of hundred data sets. Help!
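
    Outside Excel, the counts alone are enough: walk the cumulative frequencies until the target rank is reached. A minimal sketch in Python using the nearest-rank definition (note this is not exactly Excel's PERCENTILE interpolation, so results can differ slightly away from the median):

        import math

        buckets = [(1, 2), (2, 4), (3, 2), (4, 3)]    # (value, count), values sorted

        def percentile_from_buckets(buckets, p):
            # Nearest-rank percentile for 0 < p <= 1, computed from counts only.
            total = sum(count for _, count in buckets)
            rank = max(1, math.ceil(p * total))        # 1-based rank of target observation
            cumulative = 0
            for value, count in buckets:
                cumulative += count
                if cumulative >= rank:
                    return value

        print(percentile_from_buckets(buckets, 0.5))   # median -> 2
        print(percentile_from_buckets(buckets, 0.9))   # 90th percentile -> 4

    Within Excel itself, a cumulative-count helper column plus a lookup against the target rank can express the same walk without expanding the rows.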

    Read the article

  • Manipulate Excel workbooks programmatically

    - by Tom
    I have an Excel workbook that I want to use as a template. It has several worksheets set up, one of which produces the pretty graphs and summarizes the numbers. Sheet1 needs to be populated with data that is generated by another program; the data comes in a tab-delimited file. Currently the user imports the tab-delimited file into a new workbook, selects all, copies, then goes to the template and pastes the data into Sheet1. This is a large amount of data (269 columns and over 135,000 rows), so it's a cumbersome process and the users are not experienced Excel users. All they really want is the pretty graphs. I would like to add a step after the program that generates the data to automate the process the user currently performs manually. Can anyone suggest the best method/programming language to accomplish this?
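
    One possible route is Python with the openpyxl package. A sketch with hypothetical file names follows; note that whether the template's charts survive an openpyxl load/save round-trip depends on the openpyxl version, so that needs verifying against the actual template before relying on it:

        import csv
        from openpyxl import load_workbook

        wb = load_workbook("template.xlsx")        # the workbook with the graphs
        ws = wb["Sheet1"]                           # the sheet the data must land on

        with open("export.tsv", newline="") as f:
            for r, row in enumerate(csv.reader(f, delimiter="\t"), start=1):
                for c, value in enumerate(row, start=1):
                    ws.cell(row=r, column=c, value=value)

        wb.save("report.xlsx")

    Values arrive as strings this way, so numeric columns may need an explicit conversion for the summary formulas to keep working, and at 135,000 x 269 cells this loop will be slow in pure Python; COM automation of Excel itself is another option worth comparing.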

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi. We have a feed process which runs every day of the year. As part of that we delete every row from a table (approx 1 million rows) every day, repopulate it using 5 different stored procedures and then commit the transaction. This is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete is also very simple (delete from T_PROFILE_WORK). This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help on this is greatly appreciated. Thanks, Damien

    Read the article

  • Adding name and id properties to textarea (struts)

    - by reg3n
    Hi, I mostly do CSS and PHP so I'm kind of lost here, and I have no idea if this is even possible the way I want it. Anyway, this is it: I have this code

        <html:textarea rows="10" cols="70" property="thankYouMessage" />

    and I want this textarea to render an id of "textareaID" and a name like "textareaname". How can I go about this? If I use styleID, the page just won't load anymore. I need to apply some CSS to that markup, so that's the thing. Thanks in advance!

    Read the article

  • How to insert an n:m-relationship with technical primary keys generated by a sequence?

    - by bitschnau
    Let's say I have two tables with several fields, and in every table there is a primary key which is a technical id generated by a database sequence:

        table1              table2
        -------------       -------------
        field11 <pk>        field21 <pk>
        field12             field22

    field11 and field21 are generated by sequences. There is also an n:m relationship between table1 and table2, designed in table3:

        table3
        -------------
        field11 <fk>
        field21 <fk>

    The ids in table1 and table2 are generated during the insert statements:

        INSERT INTO table1 VALUES (table1_seq1.NEXTVAL, ...
        INSERT INTO table2 VALUES (table2_seq1.NEXTVAL, ...

    Therefore I don't know the primary key of the added row in the data-access layer of my program, because the generation of the pk happens completely in the database. What's the best practice to update table3 now? How can I gain access to the primary keys of the rows I just inserted?

    Read the article

  • Get the identity value from an insert via a .net dataset update call

    - by DeveloperMCT
    Here is some sample code that inserts a record into a db table:

        Dim ds As DataSet = New DataSet()
        da.Fill(ds, "Shippers")
        Dim RowDatos As DataRow
        RowDatos = ds.Tables("Shippers").NewRow
        RowDatos.Item("CompanyName") = "Serpost Peru"
        RowDatos.Item("Phone") = "(511) 555-5555"
        ds.Tables("Shippers").Rows.Add(RowDatos)
        Dim custCB As SqlCommandBuilder = New SqlCommandBuilder(da)
        da.Update(ds, "Shippers")

    It inserts a row in the Shippers table; the ShippersID is an identity value. My question is: how can I retrieve the identity value generated when the new row is inserted in the Shippers table? I have done several web searches, and the sources I've seen on the net either don't answer it specifically or go on to talk about stored procedures. Any help would be appreciated. Thanks!

    Read the article

  • Checking if a boolean column is true in MySQL/Rails

    - by Pygmalion
    Rails and MySQL: I have a table with several boolean columns representing tags. I want to find all the rows for which a specific one of these columns is true (or I guess in the case of MySQL, 1). I have the following code in my view:

        @tag = params[:tag]
        @supplies = Supply.find(:all,
                                :conditions => ["? IS NOT NULL and ? != ''", @tag, @tag],
                                :order => 'name')

    The @tag is being passed in from the url. Why is it, then, that I am getting all of my @supplies (i.e. every row) rather than just those that are true for the @tag column? Thanks!

    Read the article

  • SQL 2008 Select Top 1000 and update the selected database drop-down

    - by CWinKY
    When you right-click and do a "Select Top 1000 Rows" on a table in SQL 2008, it opens a tab, writes the SQL and then executes it. This is okay; however, I'll often erase the SQL and use the same tab to run other SQL statements. What annoys me is that I then have to go to the database drop-down at the top of the window and change it to the database I'm actually working in, because it says Master. How can I make SQL 2008 update the selected database for this tab automatically when I right-click a table and do Select Top 1000? On a side note, can I automatically hide the select statement that it generates and just show the grid of results?

    Read the article

  • Using Linq on a Dataset

    - by JasonMHirst
    Can someone enlighten me with regards to LINQ, please? I have a dataset that is populated via a SQL stored procedure, the format of which is below:

        Country | Brand | Variant | 2004 | 2005 | 2006 | 2007 | 2008

    The number of rows varies between 50 and several thousand. What I'm trying to do is use LINQ to interrogate the dataset (there will be several LINQ queries based on user options), but a simple example would be to SUM the year columns based on Brand. I have the following, which I believe creates a template for me to work with, but from here on I'm absolutely stuck!

        sqlDA.Fill(ds, "Profiler")
        Dim brandsQuery = From cust In ds.Tables(0).AsEnumerable()
                          Select _BrandName = cust.Item("BrandName"),
                                 _y0 = cust.Item("1999"),
                                 _y1 = cust.Item("2004"),
                                 _y2 = cust.Item("2005"),
                                 _y3 = cust.Item("2006"),
                                 _y4 = cust.Item("2007"),
                                 _y5 = cust.Item("2008")

    I've tried to look at examples, but can't see any that are VB.Net based and/or show me how to Sum/Group. Can someone please provide an example so I can perhaps learn from it? Thanks.

    Read the article

  • INSERT SELECT Statement and Rollback SQL

    - by Juan Perez
    I'm working on a query that uses an INSERT SELECT statement with MS SQL Server 2008:

        INSERT INTO TABLE1 (col1, col2)
        SELECT col1, col2 FROM TABLE2

    Right now the execution of this query is inside a transaction. Pseudocode:

        try {
            begin transaction;
            query;
            commit;
        }
        catch {
            rollback;
        }

    TABLE2 has around 40 million rows. When making the insert into TABLE1, if there is an error in the middle of the INSERT, will the INSERT SELECT statement roll back by itself, or do I need to use a transaction to preserve data integrity? Is it necessary to use a transaction, or does SQL Server itself use a transaction for this type of statement?
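
    For reference, the explicit-transaction version of this pattern from client code might look like the following sketch, in Python with the pyodbc package and a hypothetical connection string; the commit only fires if the statement succeeds, and any error triggers an explicit rollback:

        import pyodbc

        # Hypothetical DSN/credentials; autocommit=False means nothing is
        # persisted until commit() is called.
        conn = pyodbc.connect("DSN=mydb;UID=app;PWD=secret", autocommit=False)
        cur = conn.cursor()

        try:
            cur.execute("INSERT INTO TABLE1 (col1, col2) "
                        "SELECT col1, col2 FROM TABLE2")
            conn.commit()
        except Exception:
            conn.rollback()
            raise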

    Read the article

  • Query filter design for string field

    - by Midhat
    A field in my table can have arbitrary strings. On the UI there is a drop-down with options like All, Value1, Value2, and the results are filtered by the selected option value. So far this is easy, and adding new filters to the UI is not a problem; it needs no changes in my stored procedure. Now I want to have an "Others" option here as well, which will return rows whose column value is neither Value1 nor Value2. Apparently this will require a "not in" operator in my query and will make maintenance difficult, as the list of values is likely to change. Any suggestions or design tips?

    Read the article

  • Form Field: How do I change the background on blur?

    - by Liso22
    I managed to remove the background when the user clicks on the field, but I cannot restore it when it blurs! This is the field:

        <textarea class="question-box"
                  style="width: 240px; background: white url('http://chusmix.com/Imagenes/contawidget.png') no-repeat 50% 50%; color: grey;"
                  cols="12" rows="5"
                  id="question-box-' . $questionformid . '"
                  name="title"
                  onblur="if(this.value == '') { this.style.color='#848484'; this.value=''this.style.background=' white url('http://chusmix.com/Imagenes/contawidget.png') no-repeat 50% 50%;e';}"
                  onfocus="if (this.value == '') {this.style.color='#444'; this.style.background='none';}"
                  type="text" maxlength="200" size="28"></textarea>

    Anyone know what I'm doing wrong? Thanks

    Read the article

  • mysql select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb:

        CREATE TABLE `testa` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          `aid1` INT(10) DEFAULT NULL,
          `aid2` INT(10) DEFAULT NULL,
          `aid3` INT(10) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

    Currently I am running the query below to retrieve all rows where id in testa matches any of the columns aid1, aid2 or aid3 in testb. The query returns accurate results, but it takes at least 30 seconds to execute, which is too much. I have also tried to optimise the query using UNION but failed.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
          ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimise my query so its total execution time is within 2-3 seconds? Thanks in advance...

    Read the article

  • Textbox is disabled after adding text dynamically in codebehind

    - by user1761348
    I can't quite work out what is happening here. The background is that I am dynamically adding table rows to a web page, and some of the cells hold controls such as dropdowns. One of the columns pulls a size into it. The column I am having issues with is the next one, which takes the text from the previous dropdown and shows the appropriate price. On doing this, however, the textbox being created appears to turn into a label, as I cannot select or adjust the text that has been put in there.

        var gotPrice = (from a in getPrice.Sizes
                        where a.Size1 == Size.SelectedValue
                        select a).First();
        TextBox Price = new TextBox();
        Price.Width = 100;
        PriceField.Controls.Add(Price);
        PriceField.Text = gotPrice.RackRate.ToString();

    I have tried calling .Enabled, but still the textbox is not editable. Any help appreciated.

    Read the article

  • k-means clustering in R on very large, sparse matrix?

    - by movingabout
    Hello, I am trying to do some k-means clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 cols, yet very sparse (only a couple of "1" values per row). The whole thing does not fit into memory, so I converted it into a sparse ARFF file, but R apparently can't read the sparse ARFF file format. I also have the data as a plain CSV file. Is there any package available in R for loading such sparse matrices efficiently? I'd then use the regular k-means algorithm from the cluster package to proceed. Many thanks
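
    If stepping outside R is an option, one alternative is SciPy's sparse matrices plus scikit-learn's MiniBatchKMeans, which accepts sparse input directly. A sketch, assuming the nonzero entries are available as (row, col) coordinate pairs in a hypothetical CSV:

        import numpy as np
        from scipy.sparse import csr_matrix
        from sklearn.cluster import MiniBatchKMeans

        # Hypothetical input: one "row,col" pair per line for every 1 in the matrix.
        coords = np.loadtxt("ones_coordinates.csv", delimiter=",", dtype=np.int64)
        rows, cols = coords[:, 0], coords[:, 1]

        X = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(500000, 4000))

        # MiniBatchKMeans works on sparse matrices and scales to this many rows.
        km = MiniBatchKMeans(n_clusters=10, random_state=0)
        labels = km.fit_predict(X)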

    Read the article
