Search Results

Search found 5233 results on 210 pages for 'records'.

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that uses phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes, parentheses and possibly other characters. I wrote a function to remove these things; the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters but that didn't speed it up at all. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast.

    Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2,000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
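
    One approach that often comes up for this kind of problem is to pay the cleaning cost once, up front, instead of per comparison: keep a digits-only copy of the phone number in a persisted computed column and index that. The sketch below is only illustrative - the table and column names (dbo.Users, Phone, PhoneDigits) and the set of characters stripped are assumptions, not details from the question.

    ```sql
    -- Hypothetical sketch: strip common formatting characters once with nested REPLACEs,
    -- persist the result, and index it so lookups can seek on clean digits.
    ALTER TABLE dbo.Users ADD PhoneDigits AS
        REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
            Phone, '-', ''), '(', ''), ')', ''), ' ', ''), '.', '')
        PERSISTED;

    CREATE INDEX IX_Users_PhoneDigits ON dbo.Users (PhoneDigits);
    ```

    With something like that in place, the import can compare its own cleaned value (or a BIGINT cast of it) against PhoneDigits without calling a scalar cleaning function per row.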

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short question summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - being a DOM parser, it needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one from XML like that pictured above, so that each can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is - e.g. whether I need to call next_record() or give it a callback coderef that accepts a record.

  • Process results of conditional split in SSIS

    - by Robert
    I have a Data Flow Task and am connecting to a database via an OLE DB Source component to extract data. This data feeds into a Conditional Split component to separate the data based on a simple expression. After the evaluation of this expression, the data will end up in either of two locations: LocationA or LocationB. Alright, I have that all set up and working properly. Once the data is separated into these two locations, additional processing is to be done on the records. Here's where I am stuck: I need the processing of records in LocationA to occur before the processing of records in LocationB. Is there a way to set the precedence of which tasks occur before others? If not, what is the best way to handle this? I was thinking I may need to write the data in LocationA and LocationB back out to the database and create a new data flow task in the control flow to handle the order in which these records must be dealt with. Any help is greatly appreciated!

  • Select a distinct record, filtering is not working..

    - by help_inmssql
    Hello everyone, I am new to SQL and need a query that returns the following records. I have a table with records such as:

        c1  c2     c3                    c4  c5  c6
        1   John   2.3.2010 12:09:54     4   7   99
        2   mike   2.3.2010 13:09:59     8   6   88
        3   ahmad  2.3.2010 13:09:59     1   9   19
        4   Jim    23.3.2010 16:35:14    4   5   99
        5   run    23.3.2010 12:09:54    3   8   12

    I want to fetch only one latest record per day. If two happen at the same time, sort by c1, so between 1 and 3 it should fetch 3:

        3   ahmad  2.3.2010 14:09:59     1   9   19
        4   Jim    23.3.2010 16:35:14    4   5   99

    I have run into a new problem with this: if I filter the records based on conditions, the last record is missing. I tried many ways but it still fails. Here update_log is my table:

        SELECT * FROM update_log t1 WHERE (t1.c3) = ( SELECT MAX(t2.c3) FROM update_log t2 WHERE DATEDIFF(dd,t2.c3, t1.c3) = 0 ) and t1.c3 > '02.03.2010' and t1.modified_at <= '22.03.2010' ORDER BY t1.c3 ASC

    But I am not able to retrieve the record "4 Jim 23.3.2010 16:35:14 4 5 99"; I don't know why this query results in only "3 ahmad 2.3.2010 14:09:59 1 9 19". The format of the column c3 is datetime. I am pumping the data into the column using $date = date("d.m.Y H:i",time()); -- a simple date fetch of today. Another query that I tried for the same purpose:

        select * from (select convert(varchar(10), c3,104) as date, max(c3) as max_date, max(c1) as Nr from update_log group by convert(varchar(10), c3,104)) as t2 inner join update_log as t1 on (t2.max_date = t1.c3 and convert(varchar(10), c3,104) = date and t1.[c1]= Nr) WHERE t1.c3 >= '02.03.2010' and t1.c3 <= '16.04.2010'

    I even tried it this way; same problem, the last record is not coming back.
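
    For what it's worth, a common pattern for "latest row per day, ties broken by another column" on SQL Server 2005+ is ROW_NUMBER() partitioned by the calendar day. The sketch below is hedged: it assumes the update_log table and columns described above, and the date-range literals are only illustrative.

    ```sql
    -- Hypothetical sketch: keep one row per calendar day - the latest c3, ties broken by the
    -- highest c1 - then apply the date-range filter on the outside.
    SELECT c1, c2, c3, c4, c5, c6
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY CONVERT(varchar(10), c3, 104)
                   ORDER BY c3 DESC, c1 DESC
               ) AS rn
        FROM update_log
    ) AS ranked
    WHERE rn = 1
      AND c3 >= '20100302' AND c3 < '20100324'
    ORDER BY c3;
    ```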

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem with the Verity search service bundled with ColdFusion 8. The issue is timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance: if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64-bit as far as I know. The error I receive is: "An error occurred while performing an operation in the Search Engine library. Error reading collection information.: com.verity.api.administration.ConfigurationException: java.io.IOException: Read timed out" Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should make it unusable. The collections aren't big - there are maybe around 5,000 records between all of them. Any ideas?

  • Easy way to observe user activity - how to improve my database structure.

    - by Thomas
    Hello, I need some advice to improve the performance of my web application. In the beginning I had this database structure:

        USER: id (Primary Key), name, password, email, ...
        PROFILE: user (Primary Key, Foreign Key to USER), birthday, region, photoFile, ...
        PAGES: id (Primary Key), user (Foreign Key to USER), page, date
        COMMENTS: id (Primary Key), user (Foreign Key to USER), page (Foreign Key to PAGES), comment, date
        FAVOURITES_PAGES: id (Primary Key), user (Foreign Key to USER), favourite_page (Foreign Key to PAGES), date

    But now one of the most important pages of the website is an activity feed, where everyone can observe the activity of other users. So I need to select all pages, comments and favourite pages of some users and display them in one list, sorted by date. For better performance (I think) I changed my structure to this (the USER and PROFILE tables are unchanged):

        ACTIVITY (additional table with the common fields user and date): id (Primary Key), user (Foreign Key to USER), date, page (Foreign Key to PAGES), comment (Foreign Key to COMMENTS), favourite_page (Foreign Key to FAVOURITES_PAGES)
        PAGES: id (Primary Key), page
        COMMENTS: id (Primary Key), page (Foreign Key to PAGES), comment
        FAVOURITES_PAGES: id (Primary Key), favourite_page (Foreign Key to PAGES)

    So now it is very easy to get sorted records from all the tables. But I don't have only the foreign keys to PAGES, COMMENTS and FAVOURITES_PAGES in the ACTIVITY table - there are about ten foreign key fields, and in any one record only one of them has a value; the others are None:

        ACTIVITY: id  user  date        page  comment ...
                  1   2     2010-02-23  None  1
                  2   1     2010-02-21  1     None

    Is this a correct solution? When I display about 40 records on one page (pagination) I must wait about one second, but the database is almost empty (a few users and about 100 records in the other tables). It depends on the number of records per page - I have checked that - but why does it take so long? Because of the relationships? The website is built in Python/Django. Any advice/opinions?
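
    One thing worth weighing against the wide ACTIVITY table (ten nullable foreign keys, one populated per row) is building the combined feed as a UNION ALL over the original tables, since each of them already carried user and date. This is only a hedged sketch - the table and column names follow the structure described above (in Django the FK column would typically be user_id rather than user), and the LIMIT/OFFSET values are placeholders for the pagination.

    ```sql
    -- Hypothetical sketch: merge the three activity sources into one date-sorted feed
    -- without a separate ACTIVITY table.
    SELECT 'page'      AS kind, `user`, id AS item_id, `date` FROM PAGES
    UNION ALL
    SELECT 'comment'   AS kind, `user`, id AS item_id, `date` FROM COMMENTS
    UNION ALL
    SELECT 'favourite' AS kind, `user`, id AS item_id, `date` FROM FAVOURITES_PAGES
    ORDER BY `date` DESC
    LIMIT 40 OFFSET 0;
    ```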

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) that are of key-value-parameter type. My scenario is fine if I get records in the EDW like this:

        Key1  Key2  Code  Value  EffectiveDate         EndDate     CurrentFlag
        100   555   01    AAA    2010-01-01 11.00.00   9999-12-31  Y
        100   555   02    BBB    2010-01-01 11.00.00   9999-12-31  Y

    This needs to be loaded into the DM by pivoting it, since the combination of Key1 and Key2 makes the natural key for the DM:

        SK  NK       01   02   EffectiveDate         EndDate     CurrentFlag
        1   100-555  AAA  BBB  2010-01-01 11.00.00   9999-12-31  Y

    My SSIS package does all this pivoting well: it looks up the incoming NK in the DIM; if it is new it inserts, otherwise a further lookup on effective date determines whether the incoming row for the same natural key has any change in an attribute. If so, it closes the current record by setting its end date and inserts a new one with the new attribute value, pulling the most recent record's values for the other attributes. My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup (on the natural key) lets both records pass and tries to insert both, which fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again. So my question is: how can I configure the lookup, or what alternative way is there to handle this scenario when the same NK comes twice in a single extract, so that the first record is inserted if it doesn't exist in the Dim table and the second one is applied as an update against the record just inserted? Not sure whether what I'm trying to explain makes sense. I will attach a screenshot once I'm back at my desk (on Monday). Thanks

  • Is it possible to search locally in jqGrid with treeGrid installed

    - by Nehu
    I am using jqGrid with treeGrid. I have added a filterToolbar. I would like to search locally instead of having a server call. The treegrid docs say that, "When we initialize the grid and the data is read, the datatype is automatically set to local." So, is it possible to implement local search with treeGrid. I tried the below configuration, but it is resulting in server calls. My Configuration is var grid = $("#grid").jqGrid({ treeGrid: true, treeGridModel: 'adjacency', ExpandColumn: 'businessAreaName', ExpandColClick : true, url:'agileProgramme/records.do', datatype: 'json', mtype: 'GET', colNames:['Id' , 'Business Area' , 'Investment' , 'Org' , 'Goal' ], colModel:[ /*00*/ {name:'agileProgrammeId',index:'agileProgrammeId', width:0, editable:false,hidden:true}, /*01*/ {name:'businessAreaName',index:'businessAreaName', width:160, editable:false}, /*02*/ {name:'programmeName',index:'programmeName', width:150, editable:false, classes:'link'}, /*03*/ {name:'org',index:'org', width:50, editable:false, classes:'orgHierarchy', sortable : false}, /*04*/ {name:'goal',index:'goal', width:70, editable:false} ], treeReader : { level_field: "level", parent_id_field: "parent", leaf_field: "leaf", expanded_field: "expanded" }, autowidth: true, height: 240, pager: '#pager', sortname: 'id', sortorder: "asc", toolbar:[true,"top"], caption:"TableGridDemo", emptyrecords: "Empty records", jsonReader : { root: "rows", page: "page", total: "total", records: "records", repeatitems: false, cell: "cell", id: "agileProgrammeId" } }); And to implement the search toolbar $('#grid').jqGrid('filterToolbar', {stringResult: true,searchOnEnter : true}); Would appreciate any help or any pointer on even if it is possible?

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test whether ZTable uses fetch techniques or not. In the future we may migrate our smaller system to PGSQL, and it currently uses "Table" components (like BDE, but with an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast, because Locate/Lookup is started on the server and only these N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. The columns below are ID, seconds to find, and allocated memory in bytes on the client:

        MySQL
        500000    2,761   113 196 344
        1000000   3,214   225 471 232
        313800    0,437   225 471 232
        328066    0,468   225 471 232
        276374    0,390   225 471 232
        905984    1,264   225 471 232
        260253    0,359   225 471 232

        PGSQL
        500000    3,042   113 188 184
        1000000   3,744   225 463 064
        313800    0,436   225 463 064
        328066    0,452   225 463 064
        276374    0,375   225 463 064
        905984    1,295   225 463 064
        260253    0,359   225 463 064
        142023    0,203   225 463 064

    As you can see, the records are fetched locally; this causes the 225 MB usage, and searches are a little slow depending on where the record we must find is. I want to ask a few things: a) Does PGDAC have some technique so we can use lookups without paying for the fetch in memory and seconds? b) Or can the PG ODBC driver help with this problem via ADO? (As far as I know, ADO can use server-side cursors.) c) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Including client memory usage.) d) If there is no way to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup field changes without a real lookup? Thanks for your help: dd

  • How do I sort an internationalized i18n table with symfony and doctrine?

    - by Maurizio
    I would like to display a list of records from an internationalized table using sfDoctrinePager. Not all the records have been translated to all the languages supported by the application, so I had to implement a fallback mechanism for some fields (by overriding the getFoo() function in Bar.class.php, as explained in another post here). I have a different fallback list for each culture. Everything works fine until it comes to sorting the records in alphabetical order. I'm sorting the records at the SQL (DQL) level, by adding an ->orderBy('t.name') to the query: $q = Doctrine::getTable('Foo') ->createQuery('f') ->leftJoin('f.Translation t') ->orderBy('t.name') But here comes the trouble: the list does not get sorted correctly, regardless of the active culture. I get rather better results when I limit the translations to the active culture, like this: ->leftJoin('f.Translation t WITH lang = ?', $request->getParameter('sf_culture'); Then the sorting is correct, as long as all the translations exist for the active culture. If a translation does not exist and I have to take the name from the fallback language, the record will be displayed at the very beginning of the list (I understand this happens because the value for the current culture is null). My question is: is there a best practice for getting internationalized fields (needing fallbacks) sorted correctly with Doctrine and sfDoctrinePager? Thank you in advance.
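
    At the SQL level, the usual pattern for this is to join the translation table twice (active culture and fallback culture) and sort on a COALESCE of the two names, so rows missing an active-culture translation no longer sort as NULL at the top. The sketch below is only illustrative: the foo/foo_translation table names, the lang column and the 'it'/'en' culture codes are assumptions, not taken from the question, and in Doctrine this would be expressed through DQL rather than raw SQL.

    ```sql
    -- Hypothetical sketch: fall back to the default culture's name purely for sorting/display.
    SELECT f.id,
           COALESCE(ta.name, tf.name) AS display_name
    FROM foo f
    LEFT JOIN foo_translation ta ON ta.id = f.id AND ta.lang = 'it'  -- active culture (assumed)
    LEFT JOIN foo_translation tf ON tf.id = f.id AND tf.lang = 'en'  -- fallback culture (assumed)
    ORDER BY display_name;
    ```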

  • TSQL to insert an ascending value

    - by David Neale
    I am running some SQL that identifies records which need to be marked for deletion and inserts a value into those records. This value must be changed to render the record useless, and each record must be changed to a unique value because of a database constraint.

        UPDATE Users SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1 FROM Users a LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName WHERE (b.sAMAccountName is NULL AND a.Username LIKE 'AVSOMPOL%') OR b.userAccountControl = 514

    This is the important bit:

        SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1

    What I've tried to do is have deleted records have their Username field set to 'Deletedxxx'. The ISNULL is needed because there may be no records matching the SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%' statement, and in that case it will return NULL. I get a syntax error when trying to parse this: Msg 156, Level 15, State 1, Line 2: Incorrect syntax near the keyword 'SELECT'. Msg 102, Level 15, State 1, Line 2: Incorrect syntax near ')'. I'm sure there must be a better way to go about this - any ideas?
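
    For what it's worth, one set-based way to hand out unique 'DeletedN' values is to compute the current maximum suffix once and then add a ROW_NUMBER() per doomed row. The sketch below is hedged: it reuses the join and WHERE clause from the question unchanged, but assumes that any existing 'Deleted...' usernames end in a purely numeric suffix.

    ```sql
    -- Hypothetical sketch: number the rows to be soft-deleted and append a unique,
    -- ascending suffix in a single UPDATE.
    DECLARE @base INT;

    SELECT @base = ISNULL(MAX(CAST(SUBSTRING(Username, 8, 20) AS INT)), 0)
    FROM Users
    WHERE Username LIKE 'Deleted%';          -- 'Deleted' is 7 characters, so the suffix starts at position 8

    WITH doomed AS (
        SELECT a.Username,
               ROW_NUMBER() OVER (ORDER BY a.Username) AS rn
        FROM Users a
        LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
        WHERE (b.sAMAccountName IS NULL AND a.Username LIKE 'AVSOMPOL%')
           OR b.userAccountControl = 514
    )
    UPDATE doomed
    SET Username = 'Deleted' + CAST(@base + rn AS VARCHAR(10));
    ```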

  • random data using php & mysql

    - by Prakash
    I have a MySQL database structure like below: CREATE TABLE test ( id int(11) NOT NULL auto_increment, title text NULL, tags text NULL, PRIMARY KEY (id) ); Data in the tags field is stored as comma-separated text like html,php,mysql,website,html etc. Now I need to create an array that contains around 50 randomly selected tags from random records. Currently I am using RAND() to select 15 random records from the database and then holding all the tags from those 15 records in an array. Then I am using array_rand() for randomizing the array and selecting only 50 random tags. $query=mysql_query("select * from test order by id asc, RAND() limit 15"); $tags=""; while ($eachData=mysql_fetch_array($query)) { $additionalTags=$eachData['tags']; if ($tags=="") { $tags.=$additionalTags; } else { $tags.=$tags.",".$additionalTags; } } $tags=explode(",", $tags); $newTags=array(); foreach ($tags as $tag) { $tag=trim($tag); if ($tag!="") { if (!in_array($tag, $newTags)) { $newTags[]=$tag; } } } $random_newTags=array_rand($newTags, 50); Now I have a huge number of records in the database, and because of that RAND() is performing very slowly and sometimes doesn't work. So can anyone let me know how to handle this situation correctly so that my page will work normally?
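
    As a sketch of one way to avoid ORDER BY RAND() scanning and sorting the whole table: pick a random id in the indexed range and read forward from it. This is hedged - it assumes ids are roughly contiguous (gaps skew the distribution a little), and it returns one pseudo-random row per execution, so it would need to be run several times (or adapted) to gather the 15 rows used above.

    ```sql
    -- Hypothetical sketch: one pseudo-random row via an indexed id seek instead of ORDER BY RAND().
    SELECT t.id, t.tags
    FROM test AS t
    JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM test)) + 1 AS rid) AS r
      ON t.id >= r.rid
    ORDER BY t.id
    LIMIT 1;
    ```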

  • slow SQL command

    - by Retrocoder
    I need to take some data from one table (and expand some XML on the way) and put it in another table. As the source table can have thousands or records which caused a timeout I decided to do it in batches of 100 records. The code is run on a schedule so doing it in batches works ok for the customer. If I have say 200 records in the source database the sproc runs very fast but if there are thousands it takes several minutes. I'm guessing that the "TOP 100" only takes the top 100 after it has gone through all the records. I need to change the whole code and sproc at some point as it doesn't scale but for now is there a quick fix to make this run quicker ? INSERT INTO [deviceManager].[TransactionLogStores] SELECT TOP 100 [EventId], [message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac, [message].value('(/interface/device) [1]', 'nvarchar(100)') AS device_type, [message].value('(/interface/id) [1]', 'nvarchar(100)') AS device_id, [message].value('substring(string((/interface/id)[1]), 1, 6)', 'nvarchar(100)') AS store_id, [message].value('(/interface/terminal/unit)[1]', 'nvarchar(100)') AS unit, [message].value('(/interface/terminal/trans/event)[1]', 'nvarchar(100)') AS event_id, [message].value('(/interface/terminal/trans/data)[1]', 'nvarchar(100)') AS event_data, [message].value('substring(string((/interface/terminal/trans/data)[1]), 9, 11)', 'nvarchar(100)') AS badge, [message].value('(/interface/terminal/trans/time)[1]', 'nvarchar(100)') AS terminal_time, MessageRecievedAt_UTC AS db_time FROM [deviceManager].[TransactionLog] WHERE EventId > @EventId --WHERE MessageRecievedAt_UTC > @StartTime AND MessageRecievedAt_UTC < @EndTime ORDER BY terminal_time DESC
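
    If the bottleneck is that every qualifying row's XML gets shredded before TOP 100 is applied (the ORDER BY is on a value computed from the XML), one hedged option is to pick the batch first on an indexed column and only then run the .value() calls. The sketch below changes the batch order from newest-terminal-time-first to ascending EventId (which may or may not be acceptable for the schedule), repeats only a couple of the .value() columns for brevity, and leaves out the INSERT wrapper - the full column list from the query above would go in the outer SELECT unchanged.

    ```sql
    -- Hypothetical sketch: TOP 100 on an indexed column first, XML shredding second.
    -- @EventId is the same parameter used by the original procedure.
    SELECT batch.EventId,
           batch.[message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac,
           batch.[message].value('(/interface/terminal/trans/time)[1]', 'nvarchar(100)') AS terminal_time,
           batch.MessageRecievedAt_UTC AS db_time
    FROM (
        SELECT TOP 100 EventId, [message], MessageRecievedAt_UTC
        FROM [deviceManager].[TransactionLog]
        WHERE EventId > @EventId
        ORDER BY EventId
    ) AS batch
    ORDER BY batch.EventId;
    ```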

  • Slow retrieval of data in SQLITE takes a long using ContentProvider

    - by Arlyn
    I have an application in Android (running 4.0.3) that stores a lot of data in Table A. Table A resides in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. Lots of data here means almost 80,000 records per month. Table A is structured like this: String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( " + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL" + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)" + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL" + "," + COLUMN_TAG + " TEXT" + "," + COLUMN_VALUE + " REAL NOT NULL" + "," + COLUMN_DEVICEID + " TEXT NOT NULL" + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)" + " )"; Here is the index statement: String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS " + TABLE_A + "_" + COLUMN_TIMESTAMP + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") "; I have defined the columns as well as the table name as string constants. I am already experiencing significant slowdown when retrieving this data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then I display it. Obviously, this is possibly the wrong way of doing things. I am trying to find a better way to approach this problem using a ContentProvider. But this is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from other tables which have only up to 12 records maximum. I see this delay increase as the number of records in Table A increases. This does not make any sense. I can understand the delay if I retrieve data from Table A, but why the delay in retrieving data from other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3,000 records. What could be the problem?

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present my problem related to SQL Server 2005 bidirectional replication. What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and the changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, and established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created, publishing is enabled on both databases, and the distributor and publisher are up. We are using some rules to control which records will be replicated, so we need to call our custom stored procedures during replication; the articles are set to use custom update, insert and delete stored procedures. So far so good, but... Everything works fine and changes are replicating, until updates are done on both tables simultaneously or before changes are replicated (and that takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable COL = 5
        db2.dbo.TestTable COL = 4

    But we want last-change-wins replication. Is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script which I am using. I am looking forward to your ideas, Mirek

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this: it fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day); the data is parsed; part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, and perhaps adds a few more records. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform the calculations without touching the pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (key information can be read from the database if needed), but it would speed up the process next time. Application components can (and will) be run off different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure whether I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache just to keep data between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure whether they can persist between sessions and be used by other hosts in the network... Any other suggestions? Something I should consider?
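
    On the MySQL temporary-table point: TEMPORARY tables are scoped to a single session, so they can't be shared across hosts or kept between runs, but a plain staging table can be. Below is only a hedged sketch of that option - the table name, column sizes and upsert shape are assumptions, not requirements from the question.

    ```sql
    -- Hypothetical sketch: a shared, persistent staging table for the between-iteration data set.
    CREATE TABLE IF NOT EXISTS iteration_cache (
        record_key  VARCHAR(64)    NOT NULL,
        payload     VARBINARY(512) NOT NULL,   -- the ~200-byte records mentioned in the question fit here
        updated_at  TIMESTAMP      NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
        PRIMARY KEY (record_key)
    ) ENGINE=InnoDB;

    -- Refresh a record after each run:
    INSERT INTO iteration_cache (record_key, payload)
    VALUES ('some-key', 'some-bytes')
    ON DUPLICATE KEY UPDATE payload = VALUES(payload);
    ```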

  • Javascript when to show results

    - by Pete
    This is my JavaScript below. I want to show records on load and also show new records when they are added to the database. showrecords() displays the records in the database; whereabouts can I put this in my code so that it will work correctly? `$(document).ready(function() { //showrecords() function showrecords() { $.ajax({ type: "POST", url: "demo_show.php", cache: false, success: function(html){ $("#display").after(html); document.getElementById('content').value=''; $("#flash").hide(); } }); } $(".comment_button").click(function() { var element = $(this); var test = $("#content").val(); var dataString = 'content='+ test; if(test=='') { alert("Please Enter Some Text"); } else { $("#flash").show(); $("#flash").fadeIn(400).html('<img src="http://tiggin.com/ajax-loader.gif" align="absmiddle">&nbsp;<span class="loading">Loading Comment...</span>'); $.ajax({ type: "POST", url: "demo_insert.php", data: dataString, cache: false, success: function(html){ // $("#display").after(html); document.getElementById('content').value=''; $("#flash").hide(); //Function for showing records //showrecords(); } }); } return false; }); }); `

  • MySQL: Complex Join Statement involving two tables and a third correlation table

    - by Stephen
    I have two tables that were built for two disparate systems. I have records in one table (called "leads") that represent customers, and records in another table (called "manager") that are the exact same customers, but "manager" uses different fields (for example, "leads" contains an email address, and "manager" contains two fields for two different emails--either of which might be the email from "leads"). So I've created a correlation table that contains the lead_id and manager_id; currently this correlation table is empty. I'm trying to query the "leads" table to give me records that match either "manager" email field against the single "leads" email field, while at the same time ignoring records that have already been added to the "correlation" table. (This way I can see how many matching leads have not yet been correlated.) Here's my current, invalid SQL attempt: SELECT l.id, manager.id FROM leads, manager LEFT OUTER JOIN correlation ON correlation.lead_id = leads.id WHERE correlation.id IS NULL AND leads.project != "someproject" AND (manager.orig_email = leads.email OR manager.dest_email = leads.email) AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59' ORDER BY leads.created ASC; I get the error: Unknown column 'leads.id' in 'on clause' Before you wonder: there are records in the "leads" table where leads.project != "someproject" and leads.created falls between those dates. I've included those additional parameters for completeness.
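
    A hedged sketch of one way around that error: in MySQL 5.0+ the comma-join binds more loosely than an explicit JOIN, so `leads` is not visible inside the LEFT JOIN's ON clause; writing both joins explicitly keeps it in scope. The query below just rearranges the question's own conditions (the leads/manager email match moves from WHERE into the INNER JOIN, which is equivalent for an inner join).

    ```sql
    -- Hypothetical sketch: explicit joins so every table is in scope for every ON clause.
    SELECT leads.id, manager.id
    FROM leads
    INNER JOIN manager
            ON manager.orig_email = leads.email
            OR manager.dest_email = leads.email
    LEFT OUTER JOIN correlation
            ON correlation.lead_id = leads.id
    WHERE correlation.id IS NULL
      AND leads.project != 'someproject'
      AND leads.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-10 23:59:59'
    ORDER BY leads.created ASC;
    ```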

  • Beginner Question: For extract a large subset of a table from MySQL, how does Indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has the following columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA AND a.colB=b.colB ; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB (see the sketch after this list). Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multiple-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
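
    For question 1, a hedged sketch of the composite indexes described there (one per table on the two join columns; the index names are made up). Adding an index to a 10^9-row table can itself take a long time, which is worth budgeting for.

    ```sql
    -- Hypothetical sketch: composite indexes covering both join columns.
    ALTER TABLE tblB ADD INDEX idx_tblB_colA_colB (colA, colB);
    ALTER TABLE tblA ADD INDEX idx_tblA_colA_colB (colA, colB);
    ```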

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns: * [Key] * [Value1] * ... * [ValueN] * [StartDate] * [ExpiryDate] In this example, let's suppose that [StartDate] is effectively the date in which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is: * [ExpiryDate] ASC * [Key] ASC Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing-dimension that solves this problem in a more performant manner?
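
    As a hedged illustration of the point-in-time query shape and one candidate index (the table name dbo.DimHistory is a placeholder, and this is SQL Server syntax): leading on [StartDate] lets the engine discard rows that start after the snapshot date early, with [ExpiryDate] and [Key] carried in the index so the rest of the window filter can be evaluated without extra lookups. It is a sketch of one option, not a claim that it is the optimal strategy the question asks about.

    ```sql
    -- Hypothetical sketch: point-in-time snapshot of every [Key].
    CREATE NONCLUSTERED INDEX IX_DimHistory_Start
        ON dbo.DimHistory ([StartDate])
        INCLUDE ([ExpiryDate], [Key]);

    DECLARE @AsOf DATETIME;
    SET @AsOf = '2010-01-01';

    SELECT d.[Key], d.[Value1]          -- ...remaining ValueN columns
    FROM dbo.DimHistory AS d
    WHERE d.[StartDate] <= @AsOf
      AND d.[ExpiryDate] >  @AsOf;      -- the '12/31/9999' sentinel keeps current rows in range
    ```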

  • pagination and url encoding help

    - by Sufyan
    <?php $name=$_POST['name']; ?> <form method="POST" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <input type="text" name="name"> <input type="submit" value="GO" name="submit"> </form> <?php include ('db.php'); if(isset($_POST['submit'])) { mysql_query ("INSERT INTO example (name) VALUES('$name')") or die(mysql_error()); } if (!isset($_GET['startrow']) or !is_numeric($_GET['startrow'])) { $startrow = 0; } else { $startrow = (int)$_GET['startrow']; } $query = "SELECT * FROM example ORDER BY id DESC LIMIT $startrow, 20"; $result = mysql_query($query) or die(mysql_error()); while($row = mysql_fetch_array($result)){ echo "<li>"; echo $row['name'] ." "." <a href= 'like.php?quote=" . urlencode( $row['name'] ) . "'>Click Here</a>"; echo "</li>"; } echo '<a href="'.$_SERVER['PHP_SELF'].'?startrow='.($startrow+10).'">Next</a>'; ?> I want to make my page links hidden; how can I make them hidden so that a user can't edit them? Second question: currently I am showing 10 records on each page and then a next-page button, but the next button keeps showing even when there are no more records. How do I remove the next-page button when the records have ended? Line number 28 is the link to the pages, which can easily be edited by any user; I want to make them secure (using an ID). Line 35 is the 'next' page link; this link should not appear when the records have ended.
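
    For the second question, one hedged approach is to fetch the total row count once and only print the Next link while there are rows beyond the current window; treating 20 (the LIMIT in the code above) as the page size is an assumption.

    ```sql
    -- Hypothetical sketch: total row count, so the page can decide whether a "Next" link is
    -- needed (show it only while startrow + page size < total_rows).
    SELECT COUNT(*) AS total_rows FROM example;
    ```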

  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven't been able to overcome in the last two days using stored procedures and SQL 2008. I took several approaches but all fell short. One very interesting approach was using a table subtraction - it's really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: two tables, 1Testdb and 2Testdb. My first step was to select the ID relationships ([2Testdb].Acc_id) in table 2Testdb for one given individual ([2Testdb].Bus_id), then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome. Data and structures:

        USE [Challengedb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO
        CREATE TABLE [dbo].[1Testdb]( [Acc_id] [uniqueidentifier] NULL, [Name] [Varchar(10)] NULL ) ON [PRIMARY] GO
        CREATE TABLE [dbo].[2Testdb]( [Acc_id] [uniqueidentifier] NULL, [Bus_id] [uniqueidentifier] NULL ) ON [PRIMARY] GO

    Records in 1Testdb:

        34455F60-9474-4521-804E-66DB39A579F3, John
        C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete
        DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave
        E3AAB073-2398-476D-828B-92829F686A4C, Adam

    Records in 2Testdb (relationship table, e.g. friend relationships):

        Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3
        Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3
        Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C
        Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62

    Challenge: select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] in table 2Testdb. The expected result (who does John not have a relationship with?) should be:

        C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete

    Thank you, Valentin
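
    A hedged sketch of one way to express that subtraction (it checks both directions of the relationship, which may be stricter than necessary if 2Testdb always stores both rows):

    ```sql
    -- Hypothetical sketch: everyone in 1Testdb except John himself and everyone who
    -- appears related to John in 2Testdb.
    DECLARE @John uniqueidentifier = '34455F60-9474-4521-804E-66DB39A579F3';

    SELECT t1.Acc_id, t1.Name
    FROM [dbo].[1Testdb] AS t1
    WHERE t1.Acc_id <> @John
      AND NOT EXISTS (
            SELECT 1
            FROM [dbo].[2Testdb] AS t2
            WHERE (t2.Bus_id = @John AND t2.Acc_id = t1.Acc_id)
               OR (t2.Acc_id = @John AND t2.Bus_id = t1.Acc_id)
      );
    ```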

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on with my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded, and after various processes (cleaning of foreign characters, duplicate check, id generation) it has to be inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be written back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes it gets inserted everywhere. But when I insert the same records again, it takes one hour (ideally the earlier inserted data should simply be marked as duplicate). After going through the log file, what I noticed is that there is a MySQL SELECT statement where I check for duplicates and get the IDs which are duplicates. Then I call a function inside a for loop which basically inserts records into the 5 tables and updates the id in the central table. This function call takes the majority of the time of the whole process. P.S. The records have to be inserted record by record. Kindly suggest a solution. //This is that sample code $query=mysql_query("SELECT DISTINCT p1.ID FROM table1 p1, table2 p2, table3 a WHERE p2.datatype =0 AND (p1.datatype =1 || p1.datatype=2) AND p2.ID =0 AND p1.ID = a.ID AND p1.coulmn1 = p2.column1 AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3"); $num=mysql_num_rows($query); for($i=0;$i<$num;$i++) { $f=mysql_result($query,$i,"ID"); //calling function RecordInsert($f); }
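
    Two hedged, general first steps for this shape of problem: make sure the duplicate-check join above is backed by composite indexes on the compared columns, and wrap each batch of per-record inserts in a single transaction so the loop is not paying a commit per row. The index sketch below only mirrors the column names used in the query above (including their spellings); whether these exact indexes help depends on the real data.

    ```sql
    -- Hypothetical sketch: composite indexes matching the duplicate-check join conditions.
    ALTER TABLE table1 ADD INDEX idx_t1_dupcheck (datatype, coulmn1, coulmn2, ID);
    ALTER TABLE table2 ADD INDEX idx_t2_dupcheck (datatype, ID, column1, coulmn2, column3);
    ALTER TABLE table3 ADD INDEX idx_t3_dupcheck (ID, coulmn3);
    ```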

  • Need to exclude results in a MySQL query where two table fields are not of certain values (brain far

    - by DondeEstaMiCulo
    I don't know if I'm just burnt out and can't think, or what... but I can't seem to make this work right. (We're using MySQL 5.1.) I have two tables which have some transactional stuff stored in them. There will be many records per user_id in each table. Table1 and Table2 have a one-to-one relationship with each other. I want to pull records from both tables, but I want to exclude records which have certain values in both tables. I don't care if they both don't have these values, or if just one does, but both tables should not have both values. (Does this make any sense? lol) For example:

        SELECT t1.id, t1.type, t2.name FROM table1 t1 INNER JOIN table2 t2 ON table.xid = table2.id WHERE t1.user_id = 100 AND (t1.type != 'FOO' AND t2.name != 'BAR')

    So t1.type is an ENUM with about 10 different options, and t2.name is also an ENUM with 2 options. My expected results would look a little like:

        1, FOO, YUM
        2, BOO, BAR
        3, BOO, YUM

    But instead, all I'm getting is:

        3, BOO, YUM

    because it's filtering out all records which have 'FOO' as the type, as well as those which have 'BAR' as the name. I keep waiting for that D'oh! moment where it hits me and I feel like an idiot for not realizing what I'm doing wrong. But it hasn't come. And I still feel like an idiot, lol. I appreciate any light any of you can shed on this! Many thanks in advance for the help!
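
    A hedged sketch of the usual fix for this pattern: "exclude rows where both values appear together" is NOT (a AND b), which De Morgan's law turns into (a != x OR b != y) - the AND version in the query above excludes any row that has either value on its own. The join condition is written with the t1 alias here.

    ```sql
    -- Hypothetical sketch: keep a row unless type = 'FOO' and name = 'BAR' at the same time.
    SELECT t1.id, t1.type, t2.name
    FROM table1 t1
    INNER JOIN table2 t2 ON t1.xid = t2.id
    WHERE t1.user_id = 100
      AND NOT (t1.type = 'FOO' AND t2.name = 'BAR');
    ```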
