Search Results

Search found 5233 results on 210 pages for 'a records'.

Page 146/210

  • "select * into table" - will it work for inserting data into an existing table?

    - by Shantanu Gupta
    I am trying to insert data from one of my existing tables into another existing table. Is it possible to insert data into an existing table using a SELECT * INTO query? I think it can be done using a UNION, but in that case I need to copy all data from my existing table into a temporary table, drop that table, and finally apply the UNION to insert all records back into the same table, e.g.:

        select * into #tblExisting from tblExisting
        drop table tblExisting
        select * into tblExisting from #tblExisting union select * from tblActualData

    Here tblExisting is the table where I actually want to store all the data, and tblActualData is the table from which data is to be appended to tblExisting. Is this the right method, or is there some other alternative?
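
    A minimal sketch of the simpler route, assuming both tables already exist with compatible columns (col1..col3 are placeholder names): INSERT INTO ... SELECT appends rows to an existing table without any temporary table or DROP.

        -- Append every row of tblActualData to the existing tblExisting.
        -- Column lists are spelled out so the statement does not depend on column order.
        INSERT INTO tblExisting (col1, col2, col3)
        SELECT col1, col2, col3
        FROM tblActualData;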

    Read the article

  • Insert takes too long, code optimization needed

    - by Pentium10
    I have some code I use to transfer values from table1 to table2; they sit in different databases. It's slow when I have 100,000 records; it takes forever to finish, 10+ minutes (Windows Mobile smartphone). What can I do?

        cmd.CommandText = "insert into " + TableName + " select * from sync2." + TableName;
        cmd.ExecuteNonQuery();

    EDIT: The problem is not resolved. I am still after answers.
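
    A hedged sketch of one common speed-up, assuming the transfer actually runs as several statements (for example one per table inside a loop) and that conn is an open ADO.NET-style connection from the surrounding code: wrapping the whole batch in a single explicit transaction avoids a commit per statement, which is usually the dominant cost on device storage.

        // Hypothetical sketch: conn is the open connection, tableNames the tables to copy.
        using (var tx = conn.BeginTransaction())
        {
            foreach (var tableName in tableNames)
            {
                using (var cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = "insert into " + tableName + " select * from sync2." + tableName;
                    cmd.ExecuteNonQuery();
                }
            }
            tx.Commit(); // a single commit for the whole batch instead of one per statement
        }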

    Read the article

  • Dirty Reads in Postgres

    - by User1
    I have a long-running function that should be inserting new rows. How do I check the progress of this function? I was thinking dirty reads would work, so I read http://www.postgresql.org/docs/8.4/interactive/sql-set-transaction.html and came up with the following code, which I ran in a new session:

        SET SESSION CHARACTERISTICS AS SERIALIZABLE;
        SELECT * FROM MyTable;

    Postgres gives me a syntax error. What am I doing wrong? If I do it right, will I see the inserted records while that long function is still running? Thanks
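
    For reference, a sketch of the corrected syntax from that page: the AS must be followed by TRANSACTION ISOLATION LEVEL. Note that PostgreSQL accepts READ UNCOMMITTED but treats it like READ COMMITTED, so rows inserted by the still-running, uncommitted function will not be visible either way.

        -- Corrected form of the statement; SERIALIZABLE alone after AS is a syntax error.
        SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT * FROM MyTable;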

    Read the article

  • Two PHP arrays - sort one array by the value order of the other

    - by Tisch
    Hi there, I have two PHP arrays:

    1. An array of X records containing the IDs of WordPress posts (in a particular order)
    2. An array of WordPress posts

    The two arrays look something like this:

    Array One (sorted custom array of WordPress post IDs):

        Array ( [0] => 54 [1] => 10 [2] => 4 )

    Array Two (WordPress post array):

        Array
        (
            [0] => stdClass Object ( [ID] => 4  [post_author] => 1 )
            [1] => stdClass Object ( [ID] => 54 [post_author] => 1 )
            [2] => stdClass Object ( [ID] => 10 [post_author] => 1 )
        )

    I would like to sort the array of WordPress posts into the order of the IDs in the first array. I hope this makes sense, and thanks in advance for any help. Tom

    Edit: The server is running PHP version 5.2.14.
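
    A sketch of one way to do this that works on PHP 5.2 (no closures needed), assuming the two arrays are named $order and $posts (names are illustrative): index the posts by ID once, then walk the ID list.

        <?php
        // Index the posts by their ID.
        $byId = array();
        foreach ($posts as $post) {
            $byId[$post->ID] = $post;
        }

        // Rebuild the post array in the order given by the ID list.
        $sorted = array();
        foreach ($order as $id) {
            if (isset($byId[$id])) {
                $sorted[] = $byId[$id];
            }
        }
        ?>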

    Read the article

  • Alternative to NOT EXISTS

    - by Dave Colwell
    Hi all, I have two tables linked by an ID column; let's call them table A and table B. My goal is to find all the records in table A that have no record in table B. For instance:

        Table A:            Table B:
        ID   Value          ID   Value
        1    value1         1    x
        2    value2         2    y
        3    value3         4    z
        4    value4         4    l

    As you can see, the record with ID = 3 does not exist in table B, so I want a query that will give me record 3 from table A. The way I am currently doing this is by saying

        AND NOT EXISTS (SELECT ID FROM TableB)

    but since the tables are huge, the performance on this is terrible. Also, when I tried using a LEFT JOIN where TableB.ID is null, it didn't work. Can anyone suggest an alternative?
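
    A sketch of the two usual forms, assuming the link column is called ID in both tables. Note that the NOT EXISTS quoted in the post is missing the correlation back to table A, which changes what it filters as well as how it performs.

        -- Correlated NOT EXISTS
        SELECT a.*
        FROM TableA a
        WHERE NOT EXISTS (SELECT 1 FROM TableB b WHERE b.ID = a.ID);

        -- Equivalent LEFT JOIN / IS NULL form
        SELECT a.*
        FROM TableA a
        LEFT JOIN TableB b ON b.ID = a.ID
        WHERE b.ID IS NULL;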

    Read the article

  • Django: default=datetime.now() problem

    - by Shamanu4
    Hello. I have this db model:

        from datetime import datetime

        class TermPayment(models.Model):
            dev_session = models.ForeignKey(DeviceSession, related_name='payments')
            user_session = models.ForeignKey(UserSession, related_name='payment')
            date = models.DateTimeField(default=datetime.now(), blank=True)
            sum = models.FloatField(default=0)
            cnt = models.IntegerField(default=0)

            class Meta:
                db_table = 'term_payments'
                ordering = ['-date']

    and here a new instance is added:

        # ...
        tp = TermPayment()
        tp.dev_session = self.conn.session  # device session hash
        tp.user_session = self.session      # user session hash
        tp.sum = sum
        tp.cnt = cnt
        tp.save()

    But I have a problem: all records in the database have the same value in the date field, namely the date of the first payment. After a server restart, one record gets a new date and the others again share the first date after the restart. It looks like some data cache is being used, but I can't find where. Database: MySQL 5.1.25, Django v1.1.1.
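
    For reference, a sketch of the usual fix: datetime.now() is evaluated once, when the model module is imported, so every row gets that import-time value. Passing the callable itself (no parentheses) makes Django call it for each new row.

        from datetime import datetime
        from django.db import models

        class TermPayment(models.Model):
            # Pass the function, not its result: Django calls it at save time for every new row.
            date = models.DateTimeField(default=datetime.now, blank=True)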

    Read the article

  • Quick way to do data lookup in PHP

    - by Ghostrider
    I have a data table with 600,000 records that is around 25 megabytes in size. It is indexed by a 4-byte key. Is there a way to find a row in such a dataset quickly with PHP without resorting to MySQL? The website in question is mostly static, with minor PHP code and no database dependencies, and is therefore fast. I would like to add this data without having to use MySQL if possible. In C++ I would memory-map the file and do a binary search in it. Is there a way to do something similar in PHP?
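
    A sketch of the C++ approach translated to PHP, assuming the data is stored as fixed-width binary records sorted by a 4-byte big-endian key (the record size, layout, and function name are made up for illustration): fseek() makes the binary search cheap without loading the whole file.

        <?php
        // Hypothetical layout: 4-byte unsigned key followed by a fixed-width payload.
        define('RECORD_SIZE', 44);   // assumption: 4-byte key + 40-byte payload

        function find_record($path, $key) {
            $fp = fopen($path, 'rb');
            $count = filesize($path) / RECORD_SIZE;
            $lo = 0;
            $hi = $count - 1;
            while ($lo <= $hi) {
                $mid = intval(($lo + $hi) / 2);
                fseek($fp, $mid * RECORD_SIZE);
                $record = fread($fp, RECORD_SIZE);
                $unpacked = unpack('Nkey', $record);  // read the big-endian 4-byte key
                if ($unpacked['key'] == $key) {
                    fclose($fp);
                    return $record;                   // found: return the raw record
                } elseif ($unpacked['key'] < $key) {
                    $lo = $mid + 1;
                } else {
                    $hi = $mid - 1;
                }
            }
            fclose($fp);
            return null;                              // not found
        }
        ?>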

    Read the article

  • Need help with a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;
        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }
                if ($money_spent > 1000000) {
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 users it takes forever, not to mention 40k users. Is there any way to optimise/improve this code?

    EDIT: By the way, each of the other 20 tables has ~20-40k records.
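
    A sketch of one way to push the work into MySQL instead of looping in PHP, assuming the 20 data tables share the same layout: aggregate the per-table amounts with UNION ALL and GROUP BY, so PHP only receives one row per qualifying user.

        SELECT u.user, SUM(t.money_spent) AS total_spent
        FROM users u
        JOIN (
            SELECT user, money_spent FROM data1
            UNION ALL
            SELECT user, money_spent FROM data2
            -- ... repeat for data3 through data19 ...
            UNION ALL
            SELECT user, money_spent FROM data20
        ) t ON t.user = u.user
        WHERE u.activated = '1'
        GROUP BY u.user
        HAVING total_spent > 1000000;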

    Read the article

  • How to name uploaded files in php to prevent them from being overwritten?

    - by user156814
    I'm trying to add user-submitted articles to my website (only for admins). With each article comes an option to upload up to 3 images. My database is set up like this:

        Articles
            id
            user_id
            title
            body
            date_added
            last_edited

        Photos
            id (auto_increment)
            article_id

    First I save the article in the database, then I upload the photo (temporarily), then I create a new photo record in the database saving the article_id. Then I rename the uploaded photo to be the same as the primary key of the photo record, with a .png extension:

        $filename = $photo->id . '.png';

    I figured this would be a good way to prevent files from being overwritten, but it seems flawed to me. Any suggestions on how I should save my records and photos? Thanks
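
    A sketch of that flow, assuming a $photo record has already been inserted and $_FILES['image'] holds the upload (the field name and target directory are illustrative): naming the file after the auto-increment primary key does guarantee uniqueness, and the original file name can be kept in a column if it is needed for display later.

        <?php
        // Hypothetical names: adjust the upload field and target directory to your app.
        $uploadDir = '/var/www/uploads/photos/';
        $filename  = $photo->id . '.png';   // the primary key is unique, so no collisions

        if (is_uploaded_file($_FILES['image']['tmp_name'])) {
            move_uploaded_file($_FILES['image']['tmp_name'], $uploadDir . $filename);
        }
        ?>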

    Read the article

  • Best way to implement a List(Of) with a maximum number of items

    - by Ben
    I'm trying to figure out a good way of implementing a List(Of) that holds a maximum number of records. For example, I have a List(Of Int32) that is populated with a new Int32 item every 2 seconds, and I want to store only the most recent 2000 items. How can I make the list hold a maximum of 2000 items, so that when the 2001st item is about to be added, the list drops the 2000th item (resulting in a current total of 1999)? The thing is, I need to make sure I'm dropping only the oldest item and then adding the new item to the list. Ben
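
    A minimal sketch of one common approach, using a Queue(Of Int32) so the oldest item naturally falls off the head; the wrapper class name and limit handling are illustrative only.

        ' Hypothetical helper: keeps only the most recent maxItems values.
        Public Class BoundedBuffer
            Private ReadOnly _items As New Queue(Of Int32)
            Private ReadOnly _maxItems As Integer

            Public Sub New(ByVal maxItems As Integer)
                _maxItems = maxItems
            End Sub

            Public Sub Add(ByVal value As Int32)
                _items.Enqueue(value)               ' newest goes in at the tail
                If _items.Count > _maxItems Then
                    _items.Dequeue()                ' oldest falls off the head
                End If
            End Sub

            Public Function ToList() As List(Of Int32)
                Return New List(Of Int32)(_items)   ' snapshot in oldest-to-newest order
            End Function
        End Class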

    Read the article

  • How to prevent adding duplicate records to an SPList?

    - by truthseeker
    Hi, is there a way to block adding duplicate data to an SPList? I know that two records always differ in the ID field; I would like to validate other custom fields that I added previously and not allow the same field value to be added twice. Can anybody tell me how to implement this? I guess event receivers could be the answer, but I couldn't find out how to add a receiver to an SPList. Can anybody tell me if I'm right, and what the step-by-step procedure for adding such an event receiver is? I would also like to know how to build it and install it using a Feature file. Best regards, T.S.
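
    A sketch of the event-receiver part, assuming the field that must stay unique is called "Title" (swap in your own internal field name); the class would be bound to the list with a Receivers element in the Feature's element manifest, which is not shown here.

        using Microsoft.SharePoint;

        // Hypothetical receiver: cancels the add when another item already has the same value.
        public class PreventDuplicatesReceiver : SPItemEventReceiver
        {
            public override void ItemAdding(SPItemEventProperties properties)
            {
                string newValue = properties.AfterProperties["Title"] as string;
                SPList list = properties.OpenWeb().Lists[properties.ListId];

                SPQuery query = new SPQuery();
                query.Query = "<Where><Eq><FieldRef Name='Title'/>" +
                              "<Value Type='Text'>" + newValue + "</Value></Eq></Where>";

                if (list.GetItems(query).Count > 0)
                {
                    properties.Cancel = true;
                    properties.ErrorMessage = "An item with this value already exists.";
                }
            }
        }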

    Read the article

  • A null value should be treated as 0 when added to a decimal value

    - by Harikrishna
    There are three columns in the DataTable: A, B and C, each of type decimal. I am doing

        dt.Columns["A"].Expression = "B+C";

    to add column B's value to column C's value. If either B or C is null, the sum of B and C is null; for example, if B's value is 3 and C's value is null in the first row, then B+C (3+null) is null, which is not what I want: the result should be 3. If I replace the nulls with 0 it works, but the null values in the records must remain as they are and should not be replaced by 0. In other words, the stored nulls should not be overwritten with 0, yet when a null value is added to a decimal value it should be treated as 0. Is this possible, and how can it be done?
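
    A sketch of the usual fix: DataColumn expressions support an ISNULL(expression, replacement) function, so the substitution happens only inside the computed column and the stored nulls stay untouched.

        using System.Data;

        // dt is the existing DataTable with decimal columns A, B and C.
        // B and C keep their nulls; only the computed A treats them as 0.
        dt.Columns["A"].Expression = "ISNULL(B, 0) + ISNULL(C, 0)";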

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field in order to reconstruct a usable relationship, so that I can properly join the two tables below and query them using LINQ and its Join method. The following sample shows the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]
        Id   Name        CsvArticleIds
        --   ----------  --------
        1    Joe         "15,22"
        5    Ed          "22"
        10   Arnie       "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]
        Id   Title
        --   ----------
        8    Beginning C#
        15   A Historic look at Programming in the 90s
        22   Gardening in January

    Additional info:

    - The fix can be at any level: C#.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
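
    A sketch of the C# side in LINQ to Objects, assuming the rows have first been materialized into in-memory people and articles collections (names are placeholders): the CSV has to be split in memory, since SQL-side LINQ providers cannot translate Split.

        // Expand each person into (person, articleId) pairs, then join on the parsed id.
        var pairs =
            from p in people
            from idText in p.CsvArticleIds.Split(',')
            join a in articles on int.Parse(idText.Trim()) equals a.Id
            select new { PersonName = p.Name, ArticleTitle = a.Title };

        foreach (var pair in pairs)
            Console.WriteLine("{0} -> {1}", pair.PersonName, pair.ArticleTitle);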

    Read the article

  • iPhone NSMutableDictionary setObject forKey after initWithContentsOfFile

    - by andrew
    So here's my function:

        + (void)AddAddress:(NSString *)ip :(NSString *)mac {
            NSMutableDictionary *currentAddresses = [[NSMutableDictionary alloc]
                initWithContentsOfFile:[self filePath:ADDRESSES_FILE]];
            [currentAddresses setObject:mac forKey:ip];
            [Configuration Write:currentAddresses];
            [currentAddresses release];
        }

    [self filePath:ADDRESSES_FILE] returns a string containing the full path of a file (ADDRESSES_FILE) located in Documents. The file may not exist; in that case I just want to write an object to the dictionary and then write it out. The problem is that when I call setObject:forKey: it doesn't add anything to the dictionary. (If I just alloc/init the dictionary it works fine, but I want to read the records from the file, if there are any, and replace/add a record using the AddAddress function.)
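
    A sketch of the likely fix: -initWithContentsOfFile: returns nil when the file does not exist (or is not a valid plist), and sending setObject:forKey: to nil is a silent no-op in Objective-C, so falling back to an empty dictionary covers the first run.

        + (void)AddAddress:(NSString *)ip :(NSString *)mac {
            NSMutableDictionary *currentAddresses = [[NSMutableDictionary alloc]
                initWithContentsOfFile:[self filePath:ADDRESSES_FILE]];
            if (currentAddresses == nil) {
                // First run: no file yet, so start with an empty dictionary.
                currentAddresses = [[NSMutableDictionary alloc] init];
            }
            [currentAddresses setObject:mac forKey:ip];
            [Configuration Write:currentAddresses];
            [currentAddresses release];
        }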

    Read the article

  • JsonStore.insert() causes exception in extjs

    - by kalan
    I have an EditorGridPanel with a toolbar button to add new records. Everything works fine except in one scenario. When I try to insert a record which already exists in the database, the server sends back:

        {"success":false,"message":"already exists","data":{}}

    but the grid creates a new row marked with a red triangle. If after that I try to insert a new record (even one that doesn't exist in the database), everything works fine on the server side, but I get an 'uncaught exception' in Firebug. It says:

        uncaught exception: Ext.data.DataReader: #realize was called with invalid remote-data.
        Please see the docs for DataReader#realize and review your DataReader configuration.

    Why is that?
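
    One hedged idea (Ext JS 3.x-era API; the event signature is from memory and should be checked against the DataProxy docs): listen for the proxy's exception event and drop the phantom row that the failed create left behind, so the next save does not hand DataReader#realize a record the server never created.

        // Rough sketch only: 'store' is the grid's JsonStore; argument names are assumptions.
        store.proxy.on('exception', function (proxy, type, action, options, response, arg) {
            if (action === 'create') {
                var records = Ext.isArray(arg) ? arg : [arg];
                Ext.each(records, function (rec) {
                    if (rec && rec.phantom) {
                        store.remove(rec);   // remove the row the server rejected
                    }
                });
            }
        });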

    Read the article

  • RewriteRule and php download counter

    - by rcourtna
    (1) I have a site that serves up MP3 files:

        http://domain/files/1234567890.mp3

    (2) I have a PHP script that tracks file download counts:

        http://domain/modules/download_counter.php?file=/files/1234567890.mp3

    After download_counter.php records the download, it redirects to the original file:

        Header("Location: $FQDN_url");

    (3) I'd like all my public links to be presented as the direct file URLs from (1). I'm trying to use Apache to redirect the requests to download_counter.php:

        RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files/$1 [L]

    I'm currently stuck on (3), as it results in a redirect loop: download_counter.php simply redirects the request back to the original file (rather than streaming the file contents). I'm also motivated to use download_counter.php as is (without modifying its redirect behaviour), because the script is part of a larger CMS module and I'd like to avoid complicating my upgrade path. Perhaps there is no solution to my problem (other than modifying the download_counter script). WDYT?
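
    One hedged workaround that leaves download_counter.php untouched, assuming the physical files can live under a second path (here called /files-data/, a made-up name, and visible in the redirect URL): point the rewrite at that internal path, so the counter's redirect lands on a URL the rule no longer matches.

        # Public URLs stay /files/...; the physical MP3s live in /files-data/ (hypothetical).
        # First request: /files/x.mp3 is rewritten to the counter. The counter then redirects
        # the browser to /files-data/x.mp3, which no rule matches, so Apache serves it directly.
        RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files-data/$1 [L]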

    Read the article

  • A linq join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since in the near future my source table could be over 11,000 records. This is what I came up with so far:

        Dim lit As LiteralControl

        ' check characters not in alphabet
        Dim r As New Regex("^[^a-zA-Z]+")
        Dim query = From o In source.ToTable _
                    Where r.IsMatch(o.Field(Of String)("nam"))

        lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
        plhAlpabetLinks.Controls.Add(lit)

        Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
                Group Join o In source.ToTable _
                On l Equals o.Field(Of String)("nam").ToLowerInvariant()(0) Into g = Group _
                Select l, g.Count

        ' iterate the alphabet to generate all the links.
        For Each letter In q.AsEnumerable
            lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Kind regards, G.
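
    A hedged sketch of one way to fold both passes into a single grouping: bucket each name by its first character, using "0-9" for anything that does not start with a letter, so the source table is enumerated once.

        ' One pass over the source; the bucket key replaces both the Regex pass and the Group Join.
        Dim counts = From o In source.ToTable _
                     Let nam = o.Field(Of String)("nam") _
                     Let bucket = If(Char.IsLetter(nam(0)), Char.ToLowerInvariant(nam(0)).ToString(), "0-9") _
                     Group By bucket Into Count() _
                     Order By bucket

        For Each entry In counts
            Dim lit As New LiteralControl(String.Format("letter: {0}, count: {1}<br />", entry.bucket, entry.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Note that this sketch only reports buckets that actually occur; letters with a zero count would need the alphabet left-joined back in, as in the original second query.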

    Read the article

  • Ideas for loading initial data in an iPhone application?

    - by Derek Clarkson
    Hi all, in my app I want to load some initial data to show the user how it works. The app uses a Core Data managed SQLite db. So far I've thought of 3 options:

    1. Write code in a class to programmatically create the data.
    2. Create an XML file in the app's resources and load it through an NSXMLParser whose delegate creates the entries in the SQLite db.
    3. Same as option #2, but use a JSON file and bring in a 3rd-party lib to read it.

    Are there other options I have not found yet? And given that I'm talking about perhaps 6 records per table across 3 tables, which would you choose?
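
    A sketch of option 1 at that data volume, assuming an entity named "Item" with a "name" attribute (both made up) and an existing managedObjectContext:

        // Seed a handful of demo records programmatically, e.g. on first launch.
        NSArray *names = [NSArray arrayWithObjects:@"First demo", @"Second demo", @"Third demo", nil];
        for (NSString *name in names) {
            NSManagedObject *item = [NSEntityDescription insertNewObjectForEntityForName:@"Item"
                                                          inManagedObjectContext:managedObjectContext];
            [item setValue:name forKey:@"name"];
        }

        NSError *error = nil;
        if (![managedObjectContext save:&error]) {
            NSLog(@"Failed to seed initial data: %@", error);
        }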

    Read the article

  • Is the time cost constant when bulk inserting data into an indexed table?

    - by SiLent SoNG
    I have created an archive table which will store data for selecting only. Daily, a program will transfer a batch of records into the archive table. Several columns are indexed, while others are not. I am concerned with the time cost per batch insertion:

    - 1st batch insertion: N1
    - 2nd batch insertion: N2
    - 3rd batch insertion: N3

    The question is: will N1, N2, and N3 be roughly the same, or will N3 > N2 > N1? That is, will the time cost be constant or incremental, given the presence of several indexes? All indexes are non-clustered. The archive table structure is this:

        create table document (
            doc_id   int unsigned primary key,
            owner_id int,           -- indexed
            title    smalltext,
            country  char(2),
            year     year(4),
            time     datetime,
            key ix_owner(owner_id)
        )

    Read the article

  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this:

        `MyDB`.`Details` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        );

    I have the following SELECT statement in a stored procedure:

        SELECT RULE
              ,TITLE
              ,SUM(IF(t.PASSED='Y',1,0)) AS PASS
              ,SUM(IF(t.PASSED='N',1,0)) AS FAIL
        FROM (
            SELECT a.line_order
                  ,MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED
                  ,MAX(CASE WHEN a.element_name = 'RULE'   THEN a.`value` END) AS RULE
                  ,MAX(CASE WHEN a.element_name = 'TITLE'  THEN a.`value` END) AS TITLE
            FROM Details a
            WHERE run_id = runId
            GROUP BY line_order
        ) t
        GROUP BY RULE, TITLE;

    * runId is an input parameter to the stored procedure.

    This query takes about 14 seconds to run. The table has 214,856 rows, and the particular run_id I am filtering on has 162,204 records. It's not a super-high-powered machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by Rule and Title and show Pass and Fail count columns.
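
    One hedged starting point, assuming no such index exists yet: a composite index that matches the inner query's WHERE and GROUP BY lets MySQL jump straight to the filtered run_id and group by line_order without sorting the whole table.

        -- run_id is the filter, line_order the grouping column; element_name is added so the
        -- CASE tests can be evaluated from the index entries (the `value` TEXT column still
        -- requires a row lookup, since TEXT cannot be fully indexed).
        ALTER TABLE `MyDB`.`Details`
            ADD INDEX ix_run_line_element (run_id, line_order, element_name);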

    Read the article

  • mysql data being inserted twice via php

    - by Jascha
    I can't for the life of me figure out why this function is causing multiple entries in my database table. When I run the function I end up with two records stacked on top of each other, one second apart. Here is the function:

        function generate_signup_token(){
            $connection = new DB_Connect(); // <--- my database connection class
            $ip = mysql_real_escape_string($_SERVER['REMOTE_ADDR']);
            $sign_up_token = uniqid(mt_rand(), true);
            $_SESSION['signup_token'] = $sign_up_token;
            $sign_up_token = mysql_real_escape_string($sign_up_token);
            $query = "INSERT INTO `token_manager` (`ip_address`, `signup_token`) VALUES ('$ip', '$sign_up_token')";
            mysql_query($query);
        }

        generate_signup_token();

    Read the article

  • Scala case class generated field value

    - by Petteri Hietavirta
    I have an existing Scala application that uses case classes which are then persisted in MongoDB. I need to introduce a new field to a case class, but its value is derived from an existing field. For example, there is a phone number, and I want to add a normalised phone number while keeping the original. I'll update the existing records in MongoDB, but I would need to add this normalisation feature to the existing save and update code. So, is there any nice shortcut in Scala to add a "hook" to a certain field of a case class? For example, in Java one could modify the setter of the phone number.
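
    A sketch of one common pattern, with a made-up Contact case class and normalise function: compute the derived field in a factory on the companion object, so every construction path that goes through it gets the normalised value.

        // Hypothetical case class with a derived field filled in by the companion factory.
        case class Contact(phone: String, normalisedPhone: String)

        object Contact {
          private def normalise(phone: String): String =
            phone.filter(_.isDigit)                 // illustrative normalisation only

          def fromPhone(phone: String): Contact =
            Contact(phone, normalise(phone))
        }

        // Usage: callers build contacts through the factory instead of the raw constructor.
        val c = Contact.fromPhone("+358 40 123 4567")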

    Read the article

  • Multivalue Mysql Inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate.

        final String query = "insert into user(age, name, birth_date) values(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback(){
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: "could not execute native bulk manipulation query. Please check your query....." Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Are there other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!
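
    A hedged alternative sketch that sidesteps createSQLQuery entirely: drop to JDBC through the Hibernate Session (doWork and org.hibernate.jdbc.Work are available from Hibernate 3.2 onwards) and use a batched PreparedStatement; MySQL's Connector/J can also rewrite such batches into multi-value inserts when rewriteBatchedStatements=true is set on the connection URL. The rows collection and UserRow type are made up for illustration.

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                session.doWork(new Work() {
                    public void execute(Connection connection) throws SQLException {
                        PreparedStatement ps = connection.prepareStatement(
                                "insert into user(age, name, birth_date) values (?, ?, ?)");
                        for (UserRow row : rows) {          // 'rows' is a hypothetical batch of values
                            ps.setInt(1, row.getAge());
                            ps.setString(2, row.getName());
                            ps.setTimestamp(3, row.getBirthDate());
                            ps.addBatch();
                        }
                        ps.executeBatch();                  // one round trip for the whole batch
                        ps.close();
                    }
                });
                return null;
            }
        });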

    Read the article

  • How to use a Zend_Cache identifier?

    - by ArneRie
    Hi folks, I think I'm going crazy. I'm trying to implement Zend_Cache to cache my database queries. I know how it works and how to configure it, but I can't find a good way to set the identifier for the cache entries. I have a method which searches for records in my database (based on an array with search values).

        /**
         * Find Record(s)
         * Returns one record, or array with objects
         *
         * @param array $search Search columns => value
         * @param integer $limit Limit results
         * @return array One record, or array with objects
         */
        public function find(array $search, $limit = null)
        {
            $identifier = 'NoIdea';
            if (!($data = $this->_cache->load($identifier))) {
                // fetch
                // save to cache with $identifier..
            }
        }

    But what kind of identifier can I use in this situation?
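
    A sketch of one common convention: derive the ID from the method name plus its arguments, hashed so that the same search always maps to the same entry and the result only contains characters Zend_Cache accepts ([a-zA-Z0-9_]).

        public function find(array $search, $limit = null)
        {
            // Same arguments => same cache id; md5 keeps the id short and within [a-zA-Z0-9_].
            $identifier = 'find_' . md5(serialize($search) . '_' . $limit);

            if (!($data = $this->_cache->load($identifier))) {
                // fetch from the database into $data ...
                // $this->_cache->save($data, $identifier);
            }
            return $data;
        }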

    Read the article

  • Removing values from a returned linq query

    - by Diver D
    Hi there, I am hoping for some help with a query I have:

        var group = from r in CustomerItem
                    group r by r.StoreItemID into g
                    select new
                    {
                        StoreItemID = g.Key,
                        ItemCount = g.Count(),
                        ItemAmount = g.Sum(cr => cr.ItemAmount),
                        RedeemedAmount = g.Sum(x => x.RedeemedAmount)
                    };

    I am returning my results to a list so I can bind it to a listbox. I have a property called EntryType, which is an int with two possible values, 1 or 2. Let's say my query is working with 3 items: two of them have EntryType = 1 and the third has EntryType = 2. The first records have an ItemAmount of 55.00 each and the third has an ItemAmount of 50.00. How can I group using something similar to the above, but subtract the 50.00 ItemAmount from the grouped amount to return 60.00? Any help would be great!!
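
    A sketch of one way to do this, assuming rows with EntryType = 2 should be subtracted rather than added: sign the amount inside the Sum.

        var group = from r in CustomerItem
                    group r by r.StoreItemID into g
                    select new
                    {
                        StoreItemID = g.Key,
                        ItemCount = g.Count(),
                        // EntryType 1 adds to the total, EntryType 2 subtracts from it:
                        // (55 + 55) - 50 = 60 for the example in the question.
                        ItemAmount = g.Sum(cr => cr.EntryType == 1 ? cr.ItemAmount : -cr.ItemAmount),
                        RedeemedAmount = g.Sum(x => x.RedeemedAmount)
                    };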

    Read the article
