Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Encoding MySQL text fields into UTF-8 text files - problems with special characters

    - by Matt Andrews
    I'm writing a php script to export MySQL database rows into a .txt file formatted for Adobe InDesign's internal markup. Exports work, but when I encounter special characters like é or umlauts, I get weird symbols (e.g. ChloÃ« Hanslip instead of Chloë Hanslip). Rather than run a search and replace for every possible weird character, I need a better method. I've checked that when the text hits the database, it's saved properly - in the database I see the special characters. My export code basically runs some regular expressions to put in the InDesign code tags, and I'm left with the weird symbols. If I just output the text to the browser (rather than prompt for a text file download), it displays properly. When I save the file I use this code: header("Content-disposition: attachment; filename=test.txt"); header("Content-Type: text/plain; charset=utf-8"); I've tried various combinations of utf8_encode() and iconv() to no avail. Can anybody point me in the right direction here?
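
    A frequent culprit with symptoms like these is the MySQL connection charset rather than the output headers: if the connection defaults to latin1, UTF-8 text gets mangled on the way out even though it looks fine in the database. A minimal sketch of an export that pins the charset end to end, assuming mysqli (connection details and table names are placeholders, not the poster's code):

        <?php
        $link = mysqli_connect('localhost', 'user', 'secret', 'mydb');   // placeholder credentials
        mysqli_set_charset($link, 'utf8');        // make the connection itself speak UTF-8

        header('Content-Disposition: attachment; filename=test.txt');
        header('Content-Type: text/plain; charset=utf-8');

        $result = mysqli_query($link, 'SELECT title, body FROM articles');   // hypothetical table
        while ($row = mysqli_fetch_assoc($result)) {
            // rows now arrive as genuine UTF-8, so no utf8_encode()/iconv() pass is needed
            echo $row['title'] . "\t" . $row['body'] . "\n";
        }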

    Read the article

  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LinqToSQL. I parse the XML file using LinqToXml into IEnumerable. Then, I create a foreach loop, where I build my LinqToSQL objects and call InsertOnSubmit and SubmitChanges at the end of each loop. Nothing special here. Usually, I make it through around 4,100 records before receiving a StackOverflowException from LinqToSql, right as I call SubmitChanges. It's not always at 4,100... sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, by putting them in their own Xml file, and that inserts fine... so it's not the data. I'm running the whole process from an MVC2 app that is uploading the Xml file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern that I should follow for times when I have to do many insertions into the database? I never encounter this exception on smaller Xml files, just larger ones.
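
    One pattern that is often suggested for long insert loops like this is to renew the DataContext every few hundred rows, so its change tracker never has to walk a huge accumulated object graph when SubmitChanges runs. A rough sketch, where MyDataContext, Records and MakeRecord are hypothetical stand-ins for the real types:

        const int batchSize = 500;
        var db = new MyDataContext();
        int pending = 0;

        foreach (var element in xmlRows)          // the IEnumerable produced by LINQ to XML
        {
            db.Records.InsertOnSubmit(MakeRecord(element));
            if (++pending == batchSize)
            {
                db.SubmitChanges();               // flush this batch
                db.Dispose();
                db = new MyDataContext();         // start over with an empty change tracker
                pending = 0;
            }
        }
        db.SubmitChanges();                       // flush the remainder
        db.Dispose();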

    Read the article

  • Toggle row visibility one at a time

    - by kuswantin
    I have a couple of tables with similar structures like this: <table> <tbody> <tr>content</tr> <tr>content</tr> <tr>content</tr> <tr>content</tr> <tr>content</tr> <tr>content</tr> <tr>content</tr> <tr>content</tr> ..etc --- The fake button is added here <div class="addrow">Add another</div> </tbody> </table> Since this is a long list, I have a need to toggle the rows one at a time. I just need to show the first row, of course, the rest should be toggled. The action is when I click a dynamic fake button, it will show row no. 2, and clicking again will show another next row. This is what I have done so far: $("table#field_fruit_values tr.draggable").not(':first').hide(); $("table#field_vegetables_values tr.draggable").not(':first').hide(); $("body.form table.content-multiple-table tbody").append('<div class="addrow">Add</div>'); $(".addrow").click(function() { var hiddenRow = $(this).prev('tr.draggable'); $(this).prev(hiddenRow + 1).show(); //if (hiddenRow + ':last').length) { // <= silly logic // $(this).hide(); //} }); The button only works for one row. I must have done something wrong :) When the final is reached, I also want the button to disappear. Sorry if this question sound silly. Any help would be very much appreciated. Thanks.
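
    A sketch of a click handler that reveals one hidden row per click and retires the button when nothing is left hidden (same classes as above; note that a div appended directly inside a tbody is invalid markup and some browsers hoist it out of the table, so giving the button a home in its own tr or just after the table is safer):

        $('.addrow').click(function () {
            var $tbody = $(this).closest('tbody');
            // show the first still-hidden draggable row
            $tbody.find('tr.draggable:hidden:first').show();
            // once every row is visible, retire the button
            if ($tbody.find('tr.draggable:hidden').length === 0) {
                $(this).hide();
            }
        });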

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using the SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire which helped a lot, but I'm still looking for more. I have a simple table that looks like this: CREATE TABLE [BulkData]( [ContainerId] [int] NOT NULL, [BinId] [smallint] NOT NULL, [Sequence] [smallint] NOT NULL, [ItemId] [int] NOT NULL, [Left] [smallint] NOT NULL, [Top] [smallint] NOT NULL, [Right] [smallint] NOT NULL, [Bottom] [smallint] NOT NULL, CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ( [ContainerIdId] ASC, [BinId] ASC, [Sequence] ASC )) I'm inserting data in chunks that average about 300 rows where ContainerId and BinId are constant in each chunk and the Sequence value is 0-n and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100% so it is clear that disk IO is the main issue but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: Drop the Primary key while I am doing the inserting and recreate it later Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small Anything else? -- Based on the responses I have gotten, let me clarify a little bit: Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import? Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process, I will try dropping the primary key and see if that helps. ~ Andrew
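
    Before touching the schema, two SqlBulkCopy knobs are worth checking on the client: a table-level lock and an explicit batch size, since 300-row chunks generate a lot of per-batch overhead. A sketch (the connection string and DataTable are placeholders for whatever feeds the copy today):

        // using System.Data.SqlClient;
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulk = new SqlBulkCopy(connection,
                       SqlBulkCopyOptions.TableLock,   // one bulk update lock instead of row locks
                       null))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;                 // fewer, larger batches than 300-row chunks
                bulk.BulkCopyTimeout = 0;              // no timeout for long-running streams
                bulk.WriteToServer(dataTable);         // dataTable holds the pending rows
            }
        }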

    Read the article

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. tablename = tweets_cache rows = 572,327 this is the query I'm currently using that is slow, over 5 seconds. SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50; If I take out either the WHERE or the ORDER BY, then the query is super fast 0.016 seconds. I have the following indexes on the tweets_cache table. PRIMARY published mp category province author So i'm not sure why its not using the indexes since mp, provice and published all have indexes? Doing a profile of the query shows that its not using an index to sort the query and is using filesort which is really slow. possible_keys = mp,province Extra = Using where; Using filesort I tried adding a new multie-colum index with "profiles & mp". The explain shows that this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query. http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png Something weird, I made a dump of my database to test on my local desktop so i don't screw up the live site. The same query on my local runs super fast, milliseconds. So I copied all the same mysql startup variables from the server to my local to make sure there wasn't some setting that might be causing this. But even after that the local query runs super fast, but the one on the live server is over 5 seconds. My database server is only using around 800MB of the 4GB it has available. here are the related my.ini settings i'm using default-storage-engine = MYISAM max_connections = 800 skip-locking key_buffer = 512M max_allowed_packet = 1M table_cache = 512 sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 16M myisam_sort_buffer_size = 64M thread_cache_size = 8 query_cache_size = 128M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 8 # Disable Federated by default skip-federated key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M
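
    The usual cure for this WHERE-plus-ORDER BY shape is a single composite index whose leading columns cover the equality filters and whose last column is the sort key. A sketch using the columns from the query above:

        ALTER TABLE tweets_cache
            ADD INDEX idx_province_mp_published (province, mp, published);

        -- With both equality columns leading and the sort column last, MySQL can read the
        -- index in published order and stop after 50 rows instead of filesorting the result.
        SELECT * FROM tweets_cache t
        WHERE t.province = '' AND t.mp = '0'
        ORDER BY t.published DESC
        LIMIT 50;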

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to fr

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message: Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space. We have other tables in our database which are similar in size of the amount of records that are in the tables (i.e. 700,000 records). On these tables, we can run any queries we'd like and we never receive a message about 'tempdb being full'. To resolve this, we've backed up our database, shrunk the actual database and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we still might be receiving this message? Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
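
    Two things worth checking, sketched below rather than prescribed: how full the tempdb log actually is, and whether the sort can be avoided. TOP 100 with ORDER BY over 750,000 rows typically has to sort the whole table, and that sort is what spills into tempdb unless an index already presents Date_Ordered in order:

        -- How much of each log is actually in use (works on SQL Server 2000)
        DBCC SQLPERF(LOGSPACE);

        -- An index on the sort column lets the TOP 100 query walk the index
        -- instead of building a 750,000-row sort in tempdb.
        CREATE INDEX IX_Orders_Date_Ordered ON Orders (Date_Ordered DESC);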

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well known solution is to use a priority queue and pop / push the first element from every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists, and it would take O(T1 + T2) time, where T1 and T2 stand for LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored at first steps). This generally means we have to make the following "tree" of merge operations: N1, N2, N3... stand for LIST1, LIST2, LIST3 sizes O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ... O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ... O(N1 + N2 + N3 + N4 + .... + NK) It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ... , LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is - did I f%ck up somewhere or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
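
    A small Python sketch of the pairwise scheme makes the log K rounds explicit (arrays stand in for linked lists; the counting argument is unchanged): every round halves the number of lists and touches each of the N elements at most once, so the total work is O(N log K), matching the priority-queue bound.

        def merge_two(a, b):
            """Standard two-way merge in O(len(a) + len(b))."""
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            out.extend(a[i:]); out.extend(b[j:])
            return out

        def merge_k(lists):
            """Pairwise ('tournament') merging: about log2(k) rounds over all N elements."""
            while len(lists) > 1:
                merged = [merge_two(lists[i], lists[i + 1])
                          for i in range(0, len(lists) - 1, 2)]
                if len(lists) % 2:             # the odd list out is carried to the next round
                    merged.append(lists[-1])
                lists = merged
            return lists[0] if lists else []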

    Read the article

  • TableAdapter.Update not working

    - by Wesley
    Here is my function: private void btnSave_Click(object sender, EventArgs e) { wO_FlangeMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_FlangeMillBundles); wO_HeadMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_HeadMillBundles); wO_WebMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_WebMillBundles); int rowsaffected = wO_MillTableAdapter.Update(invClerkDataDataSet.WO_Mill); MessageBox.Show(invClerkDataDataSet.WO_Mill.Rows[0]["GasReading"].ToString()); MessageBox.Show(rowsaffected.ToString()); } You can see the fourth update in the function uses the same functionality as the rest, I just have some debugging stuff added. The first three tables are bound to DataGridViews and work fine. The fourth table has it's members bound to various text boxes. When I change the value in the text box bound to the GasReading column and click save the first MessageBox does in fact show the new value, so it's making it into the dataset correctly. However, the rowsaffected is always showing 0 and the value in the actual database is not being updated. Can anyone see my problem? I understand that the problem must be elsewhere in my code since the four update methods are the same, but I just don't know where to start.
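
    One common cause when only the text-box-bound table misbehaves: the pending edit is still sitting in the binding rather than committed to the row, so the indexer shows the new value while the row's state stays Unchanged and Update() finds nothing to send. A sketch of committing the binding first (the BindingSource name is a guess at what the designer generated; substitute the real one):

        private void btnSave_Click(object sender, EventArgs e)
        {
            this.Validate();                      // push the focused text box's value through its binding
            wO_MillBindingSource.EndEdit();       // commit the pending edit into WO_Mill

            int rowsaffected = wO_MillTableAdapter.Update(invClerkDataDataSet.WO_Mill);
            MessageBox.Show(rowsaffected.ToString());
        }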

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset of data can range between 12 000 and 120 000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe the table variable is slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options in order for my UDF to perform quickly? Thanks a lot in advance.
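
    If the callers can live with a stored procedure instead of a UDF, the fast #temp-table version does not have to be abandoned: materialise the subset once inside a procedure and return the result set from there, capturing it with INSERT ... EXEC where a table is needed. A rough shape, with every object name invented for illustration:

        CREATE PROCEDURE dbo.GetSubsetReport
            @FromDate datetime,
            @ToDate   datetime
        AS
        BEGIN
            SET NOCOUNT ON;

            -- #temp tables carry real statistics, unlike table variables,
            -- so the self-joins below can be costed (and indexed) sensibly.
            SELECT Id, ParentId, Amount
            INTO #subset
            FROM dbo.BigTable
            WHERE CreatedOn BETWEEN @FromDate AND @ToDate;

            CREATE INDEX IX_subset_Parent ON #subset (ParentId);

            SELECT a.Id, b.Id AS RelatedId, a.Amount + b.Amount AS Total
            FROM #subset AS a
            JOIN #subset AS b ON b.ParentId = a.Id;
        END;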

    Read the article

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I lookup most in SQLBOL. Update Clarified example to be more specific. Edit Sorry, But I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.)
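
    For reference, the usual safe pattern is a half-open range on the untouched column, which catches every time of day on the 14th (including the last milliseconds before midnight) and still lets an index on column_datetime be used:

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT(DATETIME, '14 AUG 2008', 106)

        SELECT *
        FROM table1
        WHERE column_datetime >= @p_date
          AND column_datetime <  DATEADD(dd, 1, @p_date)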

    Read the article

  • Copy whole SQL Server database into JSON from Python

    - by Oli
    I'm facing an atypical conversion problem. About a decade ago I coded up a large site in ASP. Over the years this turned into ASP.NET but kept the same database. I've just re-done the site in Django and I've copied all the core data but before I cancel my account with the host, I need to make sure I've got a long-term backup of the data so if it turns out I'm missing something, I can copy it from a local copy. To complicate matters, I no longer have Windows. I moved to Ubuntu on all my machines some time back. I could ask the host to send me a backup but having no access to a machine with MSSQL, I wouldn't be able to use that if I needed to. So I'm looking for something that does: db = {} for table in database: db[table.name] = [row for row in table] And then I could serialize db off somewhere for later consumption... But how do I do the table iteration? Is there an easier way to do all of this? Can MSSQL do a cross-platform SQLDump (inc data)? For previous MSSQL work I've used pymssql but I don't know how to iterate the tables and copy rows (ideally with column headers so I can tell what the data is). I'm not looking for much code but I need a poke in the right direction.
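
    A rough pymssql sketch of that loop: enumerate the tables from INFORMATION_SCHEMA, keep the column names from cursor.description, and serialise the lot. The connection details are placeholders (and the exact connect() keywords vary between pymssql versions); default=str is there to cope with datetime and Decimal values.

        import json
        import pymssql

        conn = pymssql.connect(server='host', user='user', password='secret', database='mydb')
        cur = conn.cursor()

        cur.execute("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                    "WHERE TABLE_TYPE = 'BASE TABLE'")
        tables = [r[0] for r in cur.fetchall()]

        db = {}
        for table in tables:
            cur.execute("SELECT * FROM [%s]" % table)
            columns = [d[0] for d in cur.description]        # column headers
            db[table] = [dict(zip(columns, row)) for row in cur.fetchall()]

        with open('backup.json', 'w') as fh:
            json.dump(db, fh, default=str)

        conn.close()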

    Read the article

  • Full Text Search in SQL Server 2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with SQL Server 2008. My task is to investigate the issue where FTS cannot find the right result for Thai. First, I have the table which enables the FTS on the column 'ItemName' which is nvarchar. The Catalog is created with the Thai Language. Note that the Thai language is one of the languages that doesn't separate the word by spaces, so '????' '???' '????' are written like this in a sentence: '???????????' In the table, there are many rows that include the word (????); for example row#1 (ItemName: '???????????') On the webpage, I try to search for '????' but SQL Server cannot find it. So I try to investigate it by trying the following query in SQL Server: select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0) ...to see how the words are broken. The first one is the text to be broken. The second parameter is to specify that we're using Thai (WorkBreaker, so on). Here is the result: row#1 (display_item: '????', source_item: '???????????') row#2 (display_item: '????', source_item: '???????????') row#3 (display_item: '??', source_item: '???????????') Notice that the first and second row display the wrong display_item '?' in the '????' isn't even Thai characters. '?' in '????' is not a Thai character either. So the question is where did those alien characters come from? I guess this why I cannot search for '????' because the word breaker is broken and keeping the wrong character in the indexes. Please help!

    Read the article

  • C# and SQL - sub select from a table parameter

    - by Dr.HappyPants
    Below is a code snippet for passing a table as a parameter to a query that can be used in Sql Server 2008. I'm confused though about the "SELECT id.custid FROM @custids id". Why does it use id.custid and @custids id...? private static void datatable_example() { string [] custids = {"ALFKI", "BONAP", "CACTU", "FRANK"}; DataTable custid_list = new DataTable(); custid_list.Columns.Add("custid", typeof(String)); foreach (string custid in custids) { DataRow dr = custid_list.NewRow(); dr["custid"] = custid; custid_list.Rows.Add(dr); } using(SqlConnection cn = setup_connection()) { using(SqlCommand cmd = cn.CreateCommand()) { cmd.CommandText = @"SELECT C.CustomerID, C.CompanyName FROM Northwind.dbo.Customers C WHERE C.CustomerID IN (SELECT id.custid FROM @custids id)"; cmd.CommandType = CommandType.Text; cmd.Parameters.Add("@custids", SqlDbType.Structured); cmd.Parameters["@custids"].Direction = ParameterDirection.Input; cmd.Parameters["@custids"].TypeName = "custid_list_tbltype"; cmd.Parameters["@custids"].Value = custid_list; using (SqlDataAdapter da = new SqlDataAdapter(cmd)) using (DataSet ds = new DataSet()) { da.Fill(ds); PrintDataSet(ds); } } }
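
    Two details that may help in reading it: "id" is just a table alias for @custids, so SELECT id.custid FROM @custids id is the same as SELECT custid FROM @custids, and the TypeName points at a user-defined table type that must already exist on the server, roughly like this (the column type here is assumed to match Northwind's CustomerID):

        CREATE TYPE custid_list_tbltype AS TABLE
        (
            custid nchar(5) NOT NULL
        );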

    Read the article

  • Flex - Increase timeout on a PHP service function call

    - by Travesty3
    I'm using Flash Builder 4 Beta 2. I have it connecting to a PHP service. The way I set this up was using the wizard, so I didn't actually write the code to connect to it. The service looks like this: package services.flash { import mx.rpc.AsyncToken; import com.adobe.fiber.core.model_internal; import mx.rpc.AbstractOperation; import valueObjects.CustomDatatype8; import valueObjects.NewUsageData; import mx.collections.ItemResponder; import mx.rpc.remoting.RemoteObject; import mx.rpc.remoting.Operation; import com.adobe.fiber.services.wrapper.RemoteObjectServiceWrapper; import com.adobe.fiber.valueobjects.AvailablePropertyIterator; import com.adobe.serializers.utility.TypeUtility; [ExcludeClass] internal class _Super_FLASH extends RemoteObjectServiceWrapper { // Constructor public function _Super_FLASH() { // initialize service control _serviceControl = new RemoteObject(); var operations:Object = new Object(); var operation:Operation; operation = new Operation(null, "sendCommand"); operation.resultType = Object; operations["sendCommand"] = operation; ... } } One of the functions that I'm calling fetches users from a MySQL database. There are about 30,000 users right now. The service seems to timeout when fetching more than around 22,000 rows, I get the "Channel Disconnected before an acknowledgement was received" error. If I call the PHP script from a browser, it fetches them all with no problems at all, however. I have tried increasing the timeout in the PHP script (which didn't work), but obviously this isn't the problem since the browser is able to pull them up with no problems. Is there a way to increase the timeout of the PHP service in Flash Builder? I'm a bit of a noob when it comes to Flash, so please be descriptive. Thanks in advance!

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example) When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert or update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Now clearly this needs to be be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation - ab: => set transaction isolation level serializable; a: => select count(1) from table where id=1; --> 0 b: => select count(1) from table where id=1; --> 0 a: => insert into table values(1); --> 1 b: => insert into table values(1); --> ERROR: duplicate key value violates unique constraint "serial_test_pkey" Now I would expect it to throw the usual "couldn't commit due to concurrent update" but I'm guessing since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
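
    One workaround that absorbs the race instead of trying to serialise around it is the retry loop from the PostgreSQL documentation's merge example: try the UPDATE, fall back to INSERT, and if a concurrent insert wins, loop back and update. A sketch against a hypothetical kv(id, value) table; the inner BEGIN/EXCEPTION block is a subtransaction, which is what lets the duplicate-key error be caught without aborting the outer transaction:

        CREATE OR REPLACE FUNCTION merge_kv(k integer, v text) RETURNS void AS
        $$
        BEGIN
            LOOP
                UPDATE kv SET value = v WHERE id = k;
                IF found THEN
                    RETURN;
                END IF;
                BEGIN
                    INSERT INTO kv (id, value) VALUES (k, v);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- another transaction inserted the same key first; loop and update it
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;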

    Read the article

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables: CREATE TABLE [dbo].[Tracks]( [Id] [uniqueidentifier] NOT NULL, [Artist_Id] [uniqueidentifier] NOT NULL, [Album_Id] [uniqueidentifier] NOT NULL, [Title] [nvarchar](255) NOT NULL, [Length] [int] NOT NULL, CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TrackHistory]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [Datetime] [datetime] NOT NULL, CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id] ,[Datetime]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,GETDATE()) CREATE TABLE [dbo].[Ratings]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [User_Id] [uniqueidentifier] NOT NULL, [Rating] [tinyint] NOT NULL, CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id] ,[User_Id] ,[Rating]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,"C7D62450-8BE6-40F6-80F1-A539DA301772" ,1) Users User_Id|Guid Other fields Links between the tables are pretty obvious. TrackHistory has each track added to it as a row whenever it is played ie. a track will appear in there many times. Ratings value will either be 1 or -1. What I'm trying to do is select the Track with the highest rating, that is more than 2 hours old, and if there is a duplicate rating for a track (ie a track receives 6 +1 ratings and 1 - rating, giving that track a total rating of 5, another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
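
    A sketch of one way to phrase it, using the table and column names from the DDL above: aggregate the ratings and the play history in separate derived tables so the two joins do not multiply each other's rows, then filter on the two-hour cutoff and order.

        SELECT TOP 1 t.Id, t.Title, r.TotalRating, h.LastPlayed
        FROM Tracks AS t
        JOIN (SELECT Track_Id, SUM(Rating) AS TotalRating
              FROM Ratings
              GROUP BY Track_Id) AS r ON r.Track_Id = t.Id
        JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
              FROM TrackHistory
              GROUP BY Track_Id) AS h ON h.Track_Id = t.Id
        WHERE h.LastPlayed < DATEADD(hour, -2, GETDATE())   -- nothing played in the last 2 hours
        ORDER BY r.TotalRating DESC, h.LastPlayed ASC;      -- ties go to the longest-unplayed track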

    Read the article

  • DataTable vs. Collection in .Net

    - by B Pete
    I am writing a program that needs to read a set of records that describe the register map of a device I need to communicate with. Each record will have a handfull of fields that describe the properties of each register. I don't really need to edit or modify the data in my VB or C# program, though I would like to be able to display the data on a grid. I would like to store the data in a CSV file, or perhaps an XML file. I need to enable users to edit the data off-line, preferably in excel. I am considering using a DataTable or a Collection of "Register" objects (which I would define). I prototyped a DataTable, and found I can read/write XML easily using the built in methods and I can easily bind to a DataGridView. I was not able to find a way to retreive info on a single register without using a query that returns a collection of rows, even though I defined a unique primaty key column. The syntax to get a value from a column is also complex, though I could be missing something on both counts. I'm tempted to use a collection of "Register" objects that I can access via a unique key. It would be a little more coding up front, but seems like a cleaner solution overall. I should still be able to use LINQ to dataset to query subsets of registers when I need them, but would also be able to grab a single field using a the key value, something like this: Registers(keyValue).fieldName). Which would be a cleaner approach to the problem? Is there a way to read/write XML into a Collection without needing custom code? Could this be accomplished using String for a key?
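
    For comparison, a bare-bones sketch of the collection route in C# (field names are invented): a Register class plus a Dictionary keyed on the register name gives the Registers(key).Field style of access, and XmlSerializer handles the XML round trip without custom parsing code.

        public class Register
        {
            public string Name { get; set; }
            public int Address { get; set; }
            public string Description { get; set; }
        }

        // Keyed access, no query needed to reach a single register:
        var registers = new Dictionary<string, Register>();
        registers["STATUS"] = new Register { Name = "STATUS", Address = 0x10, Description = "Status flags" };
        int address = registers["STATUS"].Address;

        // XML round trip via the built-in serializer:
        var serializer = new System.Xml.Serialization.XmlSerializer(typeof(List<Register>));
        using (var writer = System.Xml.XmlWriter.Create("registers.xml"))
            serializer.Serialize(writer, new List<Register>(registers.Values));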

    Read the article

  • javascript to reference input IDs in php loop and pass values back to same input ID

    - by Smudger
    I have a form which is essentially an autocomplete input box. I can get this to work perfectly for a single text box. What I need to do is create unique input boxes by using a php for loop. This will use the counter $i to give each input box a unique name and id. The problem is that the unique value of the input box needs to be passed to the javascript. this will then pass the inputted data to the external php page and return the mysql results. As mentioned I have this working for a single input box but need assistance with passing the correct values to the javascript and returning it to correct input box. existing code for working solution (first row works only, all other rows update first row input box) <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function lookup(inputString) { if(inputString.length == 0) { // Hide the suggestion box. $('#suggestions').hide(); } else { $.post("autocompleteperson.php", {queryString: ""+inputString+""}, function(data){ if(data.length >0) { $('#suggestions').show(); $('#autoSuggestionsList').html(data); } }); } } // lookup function fill(thisValue) { $('#inputString').val(thisValue); setTimeout("$('#suggestions').hide();", 200); } </script> </head> <body> <form name="form1" id="form1" action="addepartment.php" method="post"> <table> <? for ( $i = 1; $i <=10; $i++ ) { ?> <tr> <td> <? echo $i; ?></td> <td> <div> <input type="text" size="30" value="" id="inputString" onkeyup="lookup(this.value);" onblur="fill();" /> </div> <div class="suggestionsBox" id="suggestions" style="display: none;"> <img src="upArrow.png" style="position: relative; top: -12px; left: 20px;" alt="upArrow" /> <div class="suggestionList" id="autoSuggestionsList"> &nbsp; </div> </div> </td> </tr> <? } ?> </table> </body> </html> code for autocompleteperson.php is: $query = $db->query("SELECT fullname FROM Persons WHERE fullname LIKE '$queryString%' LIMIT 10"); if($query) { while ($result = $query ->fetch_object()) { echo '<li onClick="fill(\''.$result->fullname.'\');">'.$result->fullname.'</li>'; } } in order to get it to work for all rows, I include the counter $i in each of the file names as below: <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function lookup(inputString<? echo $i; ?>) { if(inputString<? echo $i; ?>.length == 0) { // Hide the suggestion box. $('#suggestions<? echo $i; ?>').hide(); } else { $.post("autocompleteperson.php", {queryString: ""+inputString<? echo $i; ?>+""}, function(data){ if(data.length >0) { $('#suggestions<? echo $i; ?>').show(); $('#autoSuggestionsList<? echo $i; ?>').html(data); } }); } } // lookup function fill(thisValue) { $('#inputString<? echo $i; ?>').val(thisValue); setTimeout("$('#suggestions<? echo $i; ?>').hide();", 200); } </script> </head> <body> <form name="form1" id="form1" action="addepartment.php" method="post"> <table> <? for ( $i = 1; $i <=10; $i++ ) { ?> <tr> <td> <? echo $i; ?></td> <td> <div> <input type="text" size="30" value="" id="inputString<? echo $i; ?>" onkeyup="lookup(this.value);" onblur="fill();" /> </div> <div class="suggestionsBox" id="suggestions<? echo $i; ?>" style="display: none;"> <img src="upArrow.png" style="position: relative; top: -12px; left: 20px;" alt="upArrow" /> <div class="suggestionList" id="autoSuggestionsList<? echo $i; ?>"> &nbsp; </div> </div> </td> </tr> <? 
} ?> </table> </body> </html> The autocomplete suggestion works (although always shown on first row) but when selecting the data always returns to row 1, even if input was on row 5. I have a feeling this has to do with the fill() but not sure? is it due the the autocomplete page code? or does the fill referencing ThisValue have something to do with it. example of this page (as above) can be found here Thanks for the assistance, much appreciated. UPDATE - LOLO <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function lookup(i, inputString) { if(inputValue.length == 0) { // Hide the suggestion box. $('#suggestions' + i).hide(); } else { $.post("autocompleteperson.php", {queryString: ""+inputString+""}, function(data){ if(data.length >0) { $('#suggestions' + i).show(); $('#autoSuggestionsList' + i).html(data); } }); } } // lookup function fill(i, thisValue) { $('#inputString' + i).val(thisValue); setTimeout("$('#suggestions' + i).hide();", 200); } </script> </head> <body> <form name="form1" id="form1" action="addepartment.php" method="post"> <table> <? for ( $i = 1; $i <=10; $i++ ) { ?> <tr> <td> <? echo $i; ?></td> <td> <div> <input type="text" size="30" value="" id="inputString<?php echo $i; ?>" onkeyup="lookup(this.value);" onblur="fill();" /> </div> <div class="suggestionsBox" id="suggestions<? echo $i; ?>" style="display: none;"> <img src="upArrow.png" style="position: relative; top: -12px; left: 20px;" alt="upArrow" /> <div class="suggestionList" id="autoSuggestionsList<? echo $i; ?>"> &nbsp; </div> </div> </td> </tr> <? } ?> </table> </body> </html> google chrome catched the following errors: first line on input, second line on loosing focus
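
    A sketch of the per-row wiring with the row index passed around explicitly. It assumes the markup calls lookup(<?php echo $i; ?>, this.value) and that autocompleteperson.php is also given the row number so the <li> elements it emits can call fill() with the same index; neither change is in the code above yet.

        function lookup(i, inputString) {
            if (inputString.length === 0) {
                $('#suggestions' + i).hide();
                return;
            }
            $.post('autocompleteperson.php', { queryString: inputString, row: i }, function (data) {
                if (data.length > 0) {
                    $('#suggestions' + i).show();
                    $('#autoSuggestionsList' + i).html(data);
                }
            });
        }

        function fill(i, thisValue) {
            $('#inputString' + i).val(thisValue);
            setTimeout(function () { $('#suggestions' + i).hide(); }, 200);   // the closure keeps i
        }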

    Read the article

  • Is there a way to optimize this mysql query...?

    - by SpikETidE
    Hi Everyone... Say, I got these two tables.... Table 1 : Hotels hotel_id hotel_name 1 abc 2 xyz 3 efg Table 2 : Payments payment_id payment_date hotel_id total_amt comission p1 23-03-2010 1 100 10 p2 23-03-2010 2 50 5 p3 23-03-2010 2 200 25 p4 23-03-2010 1 40 2 Now, I need to get the following details from the two tables Given a particular date (say, 23-03-2010), the sum of the total_amt for each of the hotel for which a payment has been made on that particular date. All the rows that has the date 23-03-2010 ordered according to the hotel name A sample output is as follows... +------------+------------+------------+---------------+ | hotel_name | date | total_amt | commission | +------------+------------+------------+---------------+ | * abc | 23-03-2010 | 140 | 12 | +------------+------------+------------+---------------+ |+-----------+------------+------------+--------------+| || paymt_id | date | total_amt | commission || |+-----------+------------+------------+--------------+| || p1 | 23-03-2010 | 100 | 10 || |+-----------+------------+------------+--------------+| || p4 | 23-03-2010 | 40 | 2 || |+-----------+------------+------------+--------------+| +------------+------------+------------+---------------+ | * xyz | 23-03-2010 | 250 | 30 | +------------+------------+------------+---------------+ |+-----------+------------+------------+--------------+| || paymt_id | date | total_amt | commission || |+-----------+------------+------------+--------------+| || p2 | 23-03-2010 | 50 | 5 || |+-----------+------------+------------+--------------+| || p3 | 23-03-2010 | 200 | 25 || |+-----------+------------+------------+--------------+| +------------------------------------------------------+ Above the sample of the table that has to be printed... The idea is first to show the consolidated detail of each hotel, and when the '*' next to the hotel name is clicked the breakdown of the payment details will become visible... But that can be done by some jquery..!!! The table itself can be generated with php... Right now i am using two separate queries : One to get the sum of the amount and commission grouped by the hotel name. The next is to get the individual row for each entry having that date in the table. This is, of course, because grouping the records for calculating sum() returns only one row for each of the hotel with the sum of the amounts... Is there a way to combine these two queries into a single one and do the operation in a more optimized way...?? Hope i am being clear.. Thanks for your time and replies...
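
    One way to fold both queries into a single round trip is WITH ROLLUP: grouping by hotel and then by payment returns one row per payment plus a subtotal row per hotel (payment_id comes back NULL on the subtotal rows, and the final all-NULL row is the grand total, which can be skipped). A sketch, assuming payment_date is stored as a DATE:

        SELECT h.hotel_name,
               p.payment_id,
               SUM(p.total_amt) AS total_amt,
               SUM(p.comission) AS comission
        FROM Payments p
        JOIN Hotels h ON h.hotel_id = p.hotel_id
        WHERE p.payment_date = '2010-03-23'
        GROUP BY h.hotel_name, p.payment_id WITH ROLLUP;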

    Read the article

  • Excel VBA SQL Import

    - by user307655
    Hi All, I have the following code which imports data from a spreadsheet to SQL directly from Excel VBA. The code works great. However I am wondering if somebody can help me modify the code to: 1) Check if data from column A already exists in the SQL Table 2) If exists, then only update rather than import as a new role 3) if does not exist then import as a new role. Thanks again for your help Sub SQLIM() ' Send data to SQL Server ' This code loads data from an Excel Worksheet to an SQL Server Table ' Data should start in column A and should be in the same order as the server table ' Autonumber fields should NOT be included' ' FOR THIS CODE TO WORK ' In VBE you need to go Tools References and check Microsoft Active X Data Objects 2.x library Dim Cn As ADODB.Connection Dim ServerName As String Dim DatabaseName As String Dim TableName As String Dim UserID As String Dim Password As String Dim rs As ADODB.Recordset Dim RowCounter As Long Dim ColCounter As Integer Dim NoOfFields As Integer Dim StartRow As Long Dim EndRow As Long Dim shtSheetToWork As Worksheet Set shtSheetToWork = ActiveWorkbook.Worksheets("Sheet1") Set rs = New ADODB.Recordset ServerName = "WIN764X\sqlexpress" ' Enter your server name here DatabaseName = "two28it" ' Enter your database name here TableName = "COS" ' Enter your Table name here UserID = "" ' Enter your user ID here ' (Leave ID and Password blank if using windows Authentification") Password = "" ' Enter your password here NoOfFields = 7 ' Enter number of fields to update (eg. columns in your worksheet) StartRow = 2 ' Enter row in sheet to start reading records EndRow = shtSheetToWork.Cells(Rows.Count, 1).End(xlUp).Row ' Enter row of last record in sheet ' CHANGES ' Dim shtSheetToWork As Worksheet ' Set shtSheetToWork = ActiveWorkbook.Worksheets("Sheet1") '** Set Cn = New ADODB.Connection Cn.Open "Driver={SQL Server};Server=" & ServerName & ";Database=" & DatabaseName & _ ";Uid=" & UserID & ";Pwd=" & Password & ";" rs.Open TableName, Cn, adOpenKeyset, adLockOptimistic For RowCounter = StartRow To EndRow rs.AddNew For ColCounter = 1 To NoOfFields rs(ColCounter - 1) = shtSheetToWork.Cells(RowCounter, ColCounter) Next ColCounter Next RowCounter rs.UpdateBatch ' Tidy up rs.Close Set rs = Nothing Cn.Close Set Cn = Nothing End Sub
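
    One way to get insert-or-update behaviour out of the same recordset loop is to look the key from column A up before deciding whether to AddNew. A sketch of just the loop body; the key column in the SQL table is given the made-up name KeyCol here and should be replaced with the real one:

        For RowCounter = StartRow To EndRow
            Dim keyValue As String
            keyValue = CStr(shtSheetToWork.Cells(RowCounter, 1).Value)

            rs.Filter = "KeyCol = '" & Replace(keyValue, "'", "''") & "'"
            If rs.EOF Then
                rs.AddNew                         ' key not found: insert as a new row
            End If
            For ColCounter = 1 To NoOfFields
                rs(ColCounter - 1) = shtSheetToWork.Cells(RowCounter, ColCounter)
            Next ColCounter
            rs.Update
            rs.Filter = adFilterNone              ' clear the filter before the next key
        Next RowCounter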

    Read the article

  • How do I select a row in a table based on what ddl option is selected MVC?

    - by user54197
    I have a table with a few rows of data. I would like to display a row based on what option is selected on the ddl. how do I do that? <script type="text/javascript" language="javascript"> function optionSelected() { alert('HELP!!'); } </script> ... <select id="optionSelect" onchange="optionSelected()"> <option id="1">1</option> <option id="2">2</option> <option id="3">3</option> </select> <br /> <table id="optionList"> <tr><td id="1">Option 1 Selected</td></tr> <tr><td id="2">Option 2 Selected</td></tr> <tr><td id="3">Option 3 Selected</td></tr> </table>
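
    A small sketch of optionSelected() that works off position rather than the duplicated numeric ids (which are invalid HTML anyway): hide every row in the table, then show the one whose position matches the selected option.

        function optionSelected() {
            var select = document.getElementById('optionSelect');
            var rows = document.getElementById('optionList').getElementsByTagName('tr');
            for (var i = 0; i < rows.length; i++) {
                rows[i].style.display = (i === select.selectedIndex) ? '' : 'none';
            }
        }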

    Read the article

  • Passing row from UIPickerView to integer CoreData attribute

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute for a Core Data object. To do this, I am using this method: -(void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { object.integerValue = row; } This is giving me a warning: warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast What am I mixing up here? --EDIT 1-- Ok, so I can get rid of the errors by changing the method to do the following: NSNumber *number = [NSNumber numberWithInteger:row]; object.integerValue = rating; However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000. Which doesn't seem right to me, since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber?
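
    For reference, Core Data models numeric attributes as NSNumber by default (unless scalar accessors are generated), so the row has to be boxed on the way in and unboxed when it is read back; logging the NSNumber pointer with an integer format specifier is one common way to end up with a meaningless number in the tens of millions. A sketch along those lines:

        - (void)pickerView:(UIPickerView *)pickerView
              didSelectRow:(NSInteger)row
               inComponent:(NSInteger)component
        {
            object.integerValue = [NSNumber numberWithInteger:row];     // box the NSInteger
            NSLog(@"stored: %d", [object.integerValue intValue]);       // unbox to log the value
        }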

    Read the article

  • Mysqli results memory usage

    - by Poe
    Why is the memory consumption in this query continuing to rise as the internal pointer progresses through loop? How to make this more efficient and lean? $link = mysqli_connect(...); $result = mysqli_query($link,$query); // 403,268 rows in result set while ($row = mysqli_fetch_row($result)) { // print time, (get memory usage), -- row number } mysqli_free_result(); mysqli_close($link); /* 06:55:25 (1240336) -- Run query 06:55:26 (39958736) -- Query finished 06:55:26 (39958784) -- Begin loop 06:55:26 (39960688) -- Row 0 06:55:26 (45240712) -- Row 10000 06:55:26 (50520712) -- Row 20000 06:55:26 (55800712) -- Row 30000 06:55:26 (61080712) -- Row 40000 06:55:26 (66360712) -- Row 50000 06:55:26 (71640712) -- Row 60000 06:55:26 (76920712) -- Row 70000 06:55:26 (82200712) -- Row 80000 06:55:26 (87480712) -- Row 90000 06:55:26 (92760712) -- Row 100000 06:55:26 (98040712) -- Row 110000 06:55:26 (103320712) -- Row 120000 06:55:26 (108600712) -- Row 130000 06:55:26 (113880712) -- Row 140000 06:55:26 (119160712) -- Row 150000 06:55:26 (124440712) -- Row 160000 06:55:26 (129720712) -- Row 170000 06:55:27 (135000712) -- Row 180000 06:55:27 (140280712) -- Row 190000 06:55:27 (145560712) -- Row 200000 06:55:27 (150840712) -- Row 210000 06:55:27 (156120712) -- Row 220000 06:55:27 (161400712) -- Row 230000 06:55:27 (166680712) -- Row 240000 06:55:27 (171960712) -- Row 250000 06:55:27 (177240712) -- Row 260000 06:55:27 (182520712) -- Row 270000 06:55:27 (187800712) -- Row 280000 06:55:27 (193080712) -- Row 290000 06:55:27 (198360712) -- Row 300000 06:55:27 (203640712) -- Row 310000 06:55:27 (208920712) -- Row 320000 06:55:27 (214200712) -- Row 330000 06:55:27 (219480712) -- Row 340000 06:55:27 (224760712) -- Row 350000 06:55:27 (230040712) -- Row 360000 06:55:27 (235320712) -- Row 370000 06:55:27 (240600712) -- Row 380000 06:55:27 (245880712) -- Row 390000 06:55:27 (251160712) -- Row 400000 06:55:27 (252884360) -- End loop 06:55:27 (1241264) -- Free */
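
    If the whole result set does not need to be held at once, the unbuffered mode avoids storing all 403,268 rows on the client before the loop starts; a sketch (note that mysqli_free_result() also needs the result handle passed to it):

        <?php
        $link = mysqli_connect('localhost', 'user', 'secret', 'db');   // placeholder credentials
        $result = mysqli_query($link, $query, MYSQLI_USE_RESULT);      // unbuffered: rows stream in

        while ($row = mysqli_fetch_row($result)) {
            // handle one row at a time; peak memory stays roughly flat
        }

        mysqli_free_result($result);
        mysqli_close($link);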

    Read the article

  • How to give the menus associated with dojo's dijit.ComboBox different css from those of dijit.Menu

    - by sprugman
    When you use a dijit.ComboBox, the type ahead suggestions get implemented as a dijit.Menu. I've got a design which calls for the matched portion of the suggestion rows to be normal, and the unmatched portion to be bold. The structure that dojo creates is like this: <ul class="dijitReset dijitMenu"> <li role="option" class="dijitReset dijitMenuItem"> <span class="dijitComboBoxHighlightMatch">Ch</span>oice One </li> <li role="option" class="dijitReset dijitMenuItem"> <span class="dijitComboBoxHighlightMatch">Ch</span>oice Two </li> </ul> So I can target the matched part, but not the unmatched part. So my css needs to be something like: .dijitMenuItem { font-weight: bold; } .dijitMenuItem .dijitComboBoxHighlightMatch { font-weight: normal; } The problem is, if I do that, all menus will be bolded, and I don't want that. Just doing something like this: <select dojoType="dijit.form.ComboBox" class="foobar">[options]</select> puts the foobar class in the ComboBox, but the menu is an independent node not under that hierarchy. What's the easiest way to add a css class around the popup menu that the ComboBox generates?

    Read the article

  • Windows Form color changing

    - by Sef
    Hello, so I am attempting to make a MasterMind program as a sort of exercise. Field of 40 picture boxes (lines of 4, 10 rows), 6 buttons (red, green, orange, yellow, blue, purple). When I press one of these buttons (let's assume the red one), a picture box turns red. My question is how do I iterate through all these picture boxes? I can get it to work but only if I write: And this is of course no way to write this, it would take countless lines that contain basically the same thing. private void picRood_Click(object sender, EventArgs e) { UpdateDisplay(); pb1.BackColor = System.Drawing.Color.Red; } Press the red button - first picture box turns red Press the blue button - second picture box turns blue Press the orange button - third picture box turns orange And so on... I've had a previous similar program that simulates a traffic light, where I could assign a value to each color (red 0, orange 1, green 2). Is something similar needed, or how exactly do I address all those picture boxes and make them correspond to the proper button? Best Regards.
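
    A sketch of one way to do it: collect the boxes once, in board order, keep an index of the next free slot, and let every colour button share one helper (button names other than picRood are invented, and the array could just as well be built in a loop at startup):

        private PictureBox[] slots;
        private int nextSlot;

        private void Form1_Load(object sender, EventArgs e)
        {
            slots = new PictureBox[] { pb1, pb2, pb3, pb4 /* ... up to pb40 */ };
        }

        private void PlaceColour(Color colour)
        {
            if (nextSlot < slots.Length)
            {
                slots[nextSlot].BackColor = colour;   // fill the next empty box
                nextSlot++;
            }
        }

        private void picRood_Click(object sender, EventArgs e)  { UpdateDisplay(); PlaceColour(Color.Red); }
        private void picBlauw_Click(object sender, EventArgs e) { UpdateDisplay(); PlaceColour(Color.Blue); }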

    Read the article
