Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.

Page 18 of 256

  • How to reduce disk thrashing (paging)?

    - by skevar7
    I have 4 GB of RAM, but Windows still thrashes the disk sometimes (most often when an application has been minimized for a while and I then activate it again). This seems completely stupid, because Task Manager shows 2 GB of RAM free. Is there any way to prevent Windows from swapping out program memory? I tried setting Superfetch to cache startup files only (it helped a bit) and turning off the paging file (that helped a lot, and worked well for me in Windows XP; but Windows Vista/Windows 7 don't allow it - they show a "low on memory" message frequently, even when I have 1 GB of RAM free). What can you advise me to do?

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because only checksums would need to be exchanged. Instead:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to roughly the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
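
    One plausible cause, offered as a hedged note: rsync defaults to --whole-file when both source and destination are given as local paths, which an s3fs mount is, so the delta-transfer algorithm never runs. Forcing it back on looks like the line below; note that even then the delta algorithm reads both copies in full, and s3fs may still rewrite the whole object remotely:

        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql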

    Read the article

  • How to reduce the pain of the command prompt

    - by Adam
    I want to learn to use the command prompt on Windows better, to have more control over what I do and just for the learning experience. The main annoyance I have right now is all of the typing. If I want to perform an operation on a file with a long path, I'm sitting there typing it out for a minute at least, and if I make a mistake I have to press the up arrow key, scroll through the entire line and find what I did wrong. Are there any tools to make this easier?
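
    As a starting point, a few features built into cmd.exe on XP and later already cover most of this: Tab completes the path typed so far, F7 pops up a scrollable history of the session's commands, and dragging a file from Explorer onto the window pastes its full path. For example:

        C:\> cd C:\Prog<Tab>      :: expands to "C:\Program Files"; Tab again cycles matches
        C:\> doskey /history      :: prints every command entered this session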

    Read the article

  • How to reduce Windows XP computer boot time?

    - by Suma
    Are there any specific known steps I could take to make a computer with Windows XP Professional boot faster? I am interested in speeding up the following stages in particular:

    - loading the OS (from the Windows logo up to the moment the login screen appears)
    - logging the user in (from the moment you type your user name and password up to the moment all memory-resident programs and services are loaded and the computer is really ready to use)

    Read the article

  • getting mysql_insert_id() while using ON DUPLICATE KEY UPDATE with PHP

    - by julio
    Hi, I've found a few answers for this using MySQL alone, but I was hoping someone could show me a way to get the ID of the last inserted or updated row of a MySQL DB when using PHP to handle the inserts/updates. Currently I have something like this, where column3 is a unique key, and there's also an id column that's an auto-incremented primary key:

        $query = "INSERT INTO TABLE (column1, column2, column3)
                  VALUES (value1, value2, value3)
                  ON DUPLICATE KEY UPDATE column1=value1, column2=value2, column3=value3";
        mysql_query($query);
        $my_id = mysql_insert_id();

    $my_id is correct on INSERT, but incorrect when it's updating a row (ON DUPLICATE KEY UPDATE). I have seen several posts with people advising that you use something like

        INSERT INTO table (a) VALUES (0) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id)

    to get a valid ID value when ON DUPLICATE KEY is invoked, but will this return that valid ID to the PHP mysql_insert_id() function? Thanks for any advice.
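
    It should: MySQL documents that LAST_INSERT_ID(expr) also sets the value the client API reports, which is exactly what PHP's mysql_insert_id() reads. A sketch in the question's own style, with the placeholder names kept:

        $query = "INSERT INTO TABLE (column1, column2, column3)
                  VALUES (value1, value2, value3)
                  ON DUPLICATE KEY UPDATE
                      id = LAST_INSERT_ID(id),  -- pins the reported id to the existing row
                      column1 = VALUES(column1),
                      column2 = VALUES(column2)";
        mysql_query($query);
        $my_id = mysql_insert_id();  // valid on both the insert and the update path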

    Read the article

  • Backup AWS Dynamodb to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples or corrections are appreciated.
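
    The route AWS documents for this is Hive on EMR rather than a hand-written streaming job: an external Hive table backed by the DynamoDB storage handler can be exported straight to S3, and scheduling that job flow automates the backup. A rough sketch, with table, column and bucket names as placeholders:

        CREATE EXTERNAL TABLE ddb_table (id string, payload string)
        STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
        TBLPROPERTIES ("dynamodb.table.name" = "MyTable",
                       "dynamodb.column.mapping" = "id:id,payload:payload");

        INSERT OVERWRITE DIRECTORY 's3://my-backup-bucket/mytable/'
        SELECT * FROM ddb_table;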

    Read the article

  • Remove duplicate records/objects uniquely identified by multiple attributes

    - by keruilin
    I have a model called HeroStatus with the following attributes: id, user_id, recordable_type, hero_type (can be NULL!), recordable_id, created_at. There are over 100 hero_statuses, and a user can have many hero_statuses but can't have the same hero_status more than once. A user's hero_status is uniquely identified by the combination of recordable_type + hero_type + recordable_id. What I'm trying to say, essentially, is that there can't be a duplicate hero_status for a specific user. Unfortunately, I didn't have a validation in place to ensure this, so I got some duplicate hero_statuses for users after I made some code changes. For example:

        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2010-05-03 18:30:30'
        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2009-03-03 15:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = 'Hugs', recordable_id = 1, created_at = '2009-02-03 12:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = NULL, recordable_id = 2, created_at = '2009-12-03 08:30:00'

    (The last two are obviously not dups. The first two are.) So what I want to do is get rid of the duplicate hero_status. Which one? The one with the most recent date. I have three questions: How do I remove the duplicates using a SQL-only approach? How do I remove the duplicates using a pure Ruby solution, something similar to http://stackoverflow.com/questions/2790004/removing-duplicate-objects? And how do I put a validation in place to prevent duplicate entries in the future?
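
    A sketch of the SQL-only cleanup and the validation. The multi-table DELETE below assumes MySQL and keeps the oldest row of each group; the null-safe operator <=> matters because hero_type can be NULL:

        DELETE newer FROM hero_statuses newer
        JOIN hero_statuses older
          ON newer.user_id = older.user_id
         AND newer.recordable_type = older.recordable_type
         AND newer.recordable_id = older.recordable_id
         AND newer.hero_type <=> older.hero_type
         AND newer.created_at > older.created_at;

    For the future, a uniqueness validation scoped to the identifying columns (assuming vanilla ActiveRecord):

        validates_uniqueness_of :recordable_id,
          :scope => [:user_id, :recordable_type, :hero_type]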

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone share their experience of how they would go about such de-duping? This will be throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudocode (roughly):

        Iterate over the rows
          i = current_row_no
          Iterate over row no. i+1 to last_row
            if (col1 matches     // find duplicate
                && col2 matches
                && col3 matches
                && col4 matches) {
              col5List.set(i, get col5);  // aggregate
            }

    Duplicate example: A and B are duplicates. Given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), the output would be A=(1,1,1,1,1+2) and C=(2,1,1,1,1). (Notice that B has been kicked out.)
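
    The nested loop is what makes this quadratic; a single pass with a hash map keyed on the four matching columns makes it linear, provided the keys fit in the heap (otherwise, sort the file first). A rough sketch in Java 8 idiom, assuming tab-separated input and a numeric fifth column that should be summed:

        import java.io.*;
        import java.util.*;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                Map<String, Long> agg = new LinkedHashMap<>();  // keeps first-seen order
                try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split("\t", 5);
                        String key = f[0] + "\t" + f[1] + "\t" + f[2] + "\t" + f[3];
                        agg.merge(key, Long.parseLong(f[4]), Long::sum);  // aggregate col5
                    }
                }
                for (Map.Entry<String, Long> e : agg.entrySet())
                    System.out.println(e.getKey() + "\t" + e.getValue());
            }
        }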

    Read the article

  • MySQL ON DUPLICATE KEY UPDATE issue

    - by user644347
    Hi, could someone look at this and tell me where I am going wrong? I have an SQL statement that, when I echo it using PHP, prints this to the screen:

        INSERT INTO 'moviedb'.'genre' SET 'GenreID' = '18', 'GenreName' = 'Drama'
        ON DUPLICATE KEY UPDATE 'GenreName' = 'Drama' WHERE 'GenreID' = '18'

        INSERT INTO 'moviedb'.'genre' SET 'GenreID' = '16', 'GenreName' = 'Animation'
        ON DUPLICATE KEY UPDATE 'GenreName' = 'Animation' WHERE 'GenreID' = '16'

    And here is the statement:

        $sql = "INSERT INTO 'moviedb'.'genre'
                SET 'GenreID' = '{$genresID[$i]}', 'GenreName' = '{$genreName[$i]}'
                ON DUPLICATE KEY UPDATE 'GenreName' = '{$genreName[$i]}'
                WHERE 'GenreID' = '{$genresID[$i]}'";

    This is the error I receive:

        You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near ''moviedb'.'genre' SET
        'GenreID' = '18', 'GenreName' = 'Drama' ON DUPLICATE KEY ' at line 1

    Any help would be greatly appreciated, thanks in advance.
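
    Two fixes stand out: identifiers take backticks (or no quoting at all), not single quotes, and INSERT ... ON DUPLICATE KEY UPDATE accepts no WHERE clause, because the duplicated unique key itself selects the row to update. A corrected sketch of the first statement:

        INSERT INTO `moviedb`.`genre`
        SET `GenreID` = '18', `GenreName` = 'Drama'
        ON DUPLICATE KEY UPDATE `GenreName` = 'Drama';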

    Read the article

  • integer constant does 'not reduce to an integer'

    - by Dan Morgan
    I use this code to set my constants:

        // Constants.h
        extern NSInteger const KNameIndex;

        // Constants.m
        NSInteger const KNameIndex = 0;

    And in a switch statement within a file that imports the Constants.h file I have this:

        switch (self.sectionFromParentTable) {
          case KNameIndex:
            self.types = self.facilityTypes;
            break;
          ...

    I get an error at compile time that reads: "error: case label does not reduce to an integer constant". Any ideas what might be messed up?
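
    In C, and therefore in Objective-C, a case label must be an integer constant expression the compiler can evaluate; an extern const variable is a read-only runtime value, not a compile-time constant, so it doesn't qualify. An enum does. A sketch of the usual fix:

        // Constants.h
        enum {
            KNameIndex = 0
            // other indices...
        };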

    Read the article

  • Reduce EBS volume

    - by Martin
    I know about increasing, but is there a way to reduce the size of an EBS volume? I've put effort into my AMI, but soon realized it's way too big for my needs.
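
    EBS has no in-place shrink; the usual workaround is to create a new, smaller volume, attach both to an instance, copy the data across, then snapshot the new one. A rough outline for a non-root data volume (device names and filesystem are assumptions):

        mkfs -t ext4 /dev/xvdg              # format the new, smaller volume
        mkdir -p /mnt/new
        mount /dev/xvdg /mnt/new
        rsync -aH /mnt/old/ /mnt/new/       # copy data, preserving permissions and links
        umount /mnt/new                     # then snapshot the new volume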

    Read the article

  • How to reduce cpu and ram usage?

    - by Hellboy
    I am going to "read" (video/big) files from a server (shared environment) to clients (web browsers) via PHP, and would like to know first if there is a way to reduce CPU and RAM usage somehow, as both are limited for me. Thanks.
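
    Streaming in fixed-size chunks keeps memory use flat regardless of file size, since the file is never loaded whole. A minimal sketch, with $path standing in for however the script resolves the file:

        <?php
        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 8192);  // send one 8 KB chunk
            flush();                // push it to the client before reading the next
        }
        fclose($fp);

    readfile() behaves similarly as long as output buffering is off; if the host supports X-Sendfile, handing the file to the web server is cheaper still.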

    Read the article

  • prolog: reduce then write the value of a predicate

    - by jreid9001
    This is some of the code I am writing:

        assert(bar(foo)),
        assert(foo(bar-5)),

    I'm not sure if it works, though. I'm trying to get it to reduce foo by 5. I need a way to write the value of foo, but haven't found one. write('foo is' + foo) would be the logical way to me, but doesn't seem to work.
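
    Two things worth knowing, offered as a sketch rather than a drop-in fix: Prolog stores bar-5 as an unevaluated term (arithmetic only happens through is/2), and + does not concatenate strings for write/1. Updating a stored value means retracting the old fact and asserting a new one, assuming foo/1 holds a single number:

        :- dynamic foo/1.
        foo(10).

        reduce_foo(By) :-
            retract(foo(Old)),
            New is Old - By,
            assert(foo(New)),
            format("foo is ~w~n", [New]).

    Calling reduce_foo(5). then prints "foo is 5".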

    Read the article

  • Create thumbnail and reduce image size

    - by oo
    I have very large images (JPEG) and I want to write a C# program to loop through the files and reduce the size of each image by 75%. I tried this:

        Image thumbNail = image.GetThumbnailImage(800, 600, null, new IntPtr());

    but the file size is still very large. Is there any way to create thumbnails and have the file size be much smaller?
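
    File size is mostly determined by the JPEG quality used when saving, which GetThumbnailImage leaves at the encoder default. A sketch that resizes and re-encodes at a lower quality (file names and the quality value 50 are placeholders; needs System.Drawing, System.Drawing.Imaging and System.Linq):

        using (var src = Image.FromFile("big.jpg"))
        using (var dst = new Bitmap(src, src.Width / 2, src.Height / 2))
        {
            ImageCodecInfo jpeg = ImageCodecInfo.GetImageEncoders()
                .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
            var options = new EncoderParameters(1);
            options.Param[0] = new EncoderParameter(
                System.Drawing.Imaging.Encoder.Quality, 50L);  // 0-100, lower = smaller file
            dst.Save("small.jpg", jpeg, options);
        }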

    Read the article

  • Reduce Processing Time of accessing databse

    - by medma
    Hello all, I'm making an app which requires a remote database connection. I want the values in a picker to come from the database, but when I click the button that invokes the picker, it takes some time to fetch and display the values. Is there any way to make this faster? Also, is there any way to reduce the transition time between two views? Thanks.

    Read the article

  • "reduce" or "apply" using logical functions in Clojure

    - by Alex B
    I cannot use logical functions on a sequence of booleans in Clojure (1.2). Neither of the following works, because the logical operators are macros:

        (reduce and [... sequence of bools ...])
        (apply or [... sequence of bools ...])

    The error says that I "can't take value of a macro: #'clojure.core/and". How can I apply these logical functions (macros) without writing boilerplate code?
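
    Wrapping the macro in a function is enough to make it passable as a value, but Clojure also ships functions with the same short-circuiting behavior; a sketch:

        (reduce #(and %1 %2) bools)  ; wraps the macro in an anonymous fn
        (every? identity bools)      ; logical AND over a collection
        (some identity bools)        ; logical OR (returns nil when nothing is truthy)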

    Read the article

  • Reduce the number of additional Queries to 0 by overriding functions in the base model

    - by user334017
    My basic database setup is:

        User: ...
        Info:
          relations:
            User: { foreignType: one }

    When displaying information on the user, it takes 1 query to find info on the user and 1 query to find the additional info. I want to reduce this to one query that finds both. I assume I need to override a function from BaseUser.class.php, or something along those lines, but I'm not really sure what to do. Thanks!
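
    Rather than overriding anything in the base model, the usual Doctrine 1.x answer is to join the relation into the query that fetches the user, so both records arrive in a single round trip. A sketch, where the relation alias Info is an assumption based on the schema above:

        $user = Doctrine_Query::create()
            ->from('User u')
            ->leftJoin('u.Info i')
            ->where('u.id = ?', $id)
            ->fetchOne();

        // $user->Info is already hydrated; reading it triggers no extra query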

    Read the article
