Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

  • WinForms binding

    - by Anthony
    I have some controls bound to a BindingSource control. I want to do a calculation when the value changes in one control and set the result on another control. Do I update the textbox the property is bound to, or do I update the underlying entity, which would (I hope) update the control anyway? When I change textbox A, textbox B is updated with the new calculated result. This works fine, but I have noticed that when I leave textbox A, textbox B reverts to its original value. What is going on here?

  • Python Expand Tabs Length Calculation

    - by Mithrill
    I'm confused by how the length of a string is calculated when expandtabs is used. I thought expandtabs replaces tabs with the appropriate number of spaces (with the default number of spaces per tab being 8). However, when I ran the commands on strings of varying lengths with varying numbers of tabs, the length calculation was different than I expected (i.e., each tab didn't always increase the string length by 8 for each instance of "\t"). Below is a detailed script output with comments explaining what I thought each command should produce. Would someone please explain how the length is calculated when expandtabs is used?

        IDLE 2.6.5
        >>> s = '\t'
        >>> print len(s)
        1
        >>> # the length of the string without expandtabs was one (1 tab counted as a single character), as expected.
        >>> print len(s.expandtabs())
        8
        >>> # the length of the string with expandtabs was eight (1 tab counted as eight spaces).
        >>> s = '\t\t'
        >>> print len(s)
        2
        >>> # the length of the string without expandtabs was 2 (2 tabs, each counted as a single character).
        >>> print len(s.expandtabs())
        16
        >>> # the length of the string with expandtabs was 16 (2 tabs counted as 8 spaces each).
        >>> s = 'abc\tabc'
        >>> print len(s)
        7
        >>> # the length of the string without expandtabs was seven (6 characters and 1 tab counted as a single character).
        >>> print len(s.expandtabs())
        11
        >>> # the length of the string with expandtabs was NOT 14 (6 characters and one 8-space tab).
        >>> s = 'abc\tabc\tabc'
        >>> print len(s)
        11
        >>> # the length of the string without expandtabs was 11 (9 characters and 2 tabs counted as a single character each).
        >>> print len(s.expandtabs())
        19
        >>> # the length of the string with expandtabs was NOT 25 (9 characters and two 8-space tabs).
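
    For what it's worth, expandtabs is column-based: each tab is replaced by however many spaces are needed to reach the next multiple of the tab size (default 8), not by a fixed 8 spaces per tab. A minimal sketch of that rule (expanded_len is a hypothetical helper, not part of the standard library; it ignores newlines, which reset the column):

        def expanded_len(s, tabsize=8):
            # Walk the string tracking the current column; a tab jumps
            # to the next multiple of tabsize (column-based, not +8 per tab).
            col = 0
            for ch in s:
                if ch == '\t':
                    col += tabsize - (col % tabsize)
                else:
                    col += 1
            return col

        assert expanded_len('abc\tabc') == len('abc\tabc'.expandtabs())   # 11
        assert expanded_len('abc\tabc\tabc') == 19                        # not 25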

  • How to do a fetch request with expressions like this on the iPhone?

    - by dontWatchMyProfile
    The documentation has an example of how to retrieve simple values only, rather than managed objects. This is reminiscent of SQL, using aliases and functions to retrieve only calculated values. So, actually pretty geeky stuff. To get the minimum date from a bunch of records, this is used on the Mac:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event"
                                                  inManagedObjectContext:context];
        [request setEntity:entity];

        // Specify that the request should return dictionaries.
        [request setResultType:NSDictionaryResultType];

        // Create an expression for the key path.
        NSExpression *keyPathExpression = [NSExpression expressionForKeyPath:@"creationDate"];

        // Create an expression to represent the minimum value at the key path 'creationDate'.
        NSExpression *minExpression = [NSExpression expressionForFunction:@"min:"
                                                                arguments:[NSArray arrayWithObject:keyPathExpression]];

        // Create an expression description using the minExpression and returning a date.
        NSExpressionDescription *expressionDescription = [[NSExpressionDescription alloc] init];

        // The name is the key that will be used in the dictionary for the return value.
        [expressionDescription setName:@"minDate"];
        [expressionDescription setExpression:minExpression];
        [expressionDescription setExpressionResultType:NSDateAttributeType];

        // Set the request's properties to fetch just the property represented by the expressions.
        [request setPropertiesToFetch:[NSArray arrayWithObject:expressionDescription]];

        // Execute the fetch.
        NSError *error;
        NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error];
        if (objects == nil) {
            // Handle the error.
        } else {
            if ([objects count] > 0) {
                NSLog(@"Minimum date: %@", [[objects objectAtIndex:0] valueForKey:@"minDate"]);
            }
        }

        [expressionDescription release];
        [request release];

    Nice, I thought - but having a deep look into NSExpression -expressionForFunction:arguments:, it turns out that iPhone OS does NOT support the min: function. Well, is there perhaps a nifty way to use your own function for this kind of stuff on the iPhone as well? Because one thing I'm already worrying about is how I'm going to sort a table based on the calculated distance of targets on a map (location-based stuff).

  • How does a hash table work?

    - by Arec Barrwin
    I'm looking for an explanation of how a hash table works - in plain English for a simpleton like me! For example, I know it takes the key, calculates the hash (how?) and then performs some kind of modulo to work out where the value lies in the backing array, but that's where my knowledge stops. Could anyone clarify the process? Edit: I'm not asking specifically about how hash codes are calculated, but for a general overview of how a hash table works.
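
    For a concrete feel of the moving parts, here is a minimal sketch of one common design (separate chaining) in Python; real implementations add resizing when buckets fill up and carefully tuned hash functions:

        class SimpleHashTable(object):
            def __init__(self, num_buckets=16):
                # Each bucket holds the (key, value) pairs that hashed to that slot.
                self.buckets = [[] for _ in range(num_buckets)]

            def _index(self, key):
                # hash() maps the key to an integer; modulo folds it into a slot.
                return hash(key) % len(self.buckets)

            def set(self, key, value):
                bucket = self.buckets[self._index(key)]
                for i, (k, _) in enumerate(bucket):
                    if k == key:                  # key already present: overwrite
                        bucket[i] = (key, value)
                        return
                bucket.append((key, value))       # new key (or a collision): append to the chain

            def get(self, key):
                for k, v in self.buckets[self._index(key)]:
                    if k == key:
                        return v
                raise KeyError(key)

        t = SimpleHashTable()
        t.set('apple', 3)
        print(t.get('apple'))  # 3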

  • How to calculate next Friday at 3am?

    - by Mark
    How can you calculate the following Friday at 3am as a datetime object? Clarification: i.e., the calculated date should always be greater than 7 days away, and less than or equal to 14. Going with a slightly modified version of Mark's solution:

        import datetime

        def next_weekday(dt=None, time_of_day=datetime.time(hour=3), day_of_week=4):
            # Resolve the default at call time; a default of datetime.datetime.now()
            # in the signature would be evaluated only once, at definition time.
            if dt is None:
                dt = datetime.datetime.now()
            dt += datetime.timedelta(days=7)
            if dt.time() < time_of_day:
                dt = dt.combine(dt.date(), time_of_day)
            else:
                dt = dt.combine(dt.date(), time_of_day) + datetime.timedelta(days=1)
            return dt + datetime.timedelta((day_of_week - dt.weekday()) % 7)

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score toward each woman, and each woman has one toward each man. Each of the men and women has certain interests, and we calculate the preference scores based on those interests. So initially we have an input file with x columns. The first column is the person (man/woman) id; ids are just the numbers 0..n (the first half are men and the second half women). The remaining x-1 columns hold the interests; these are integers too. From this n by x-1 matrix we come up with an n by n/2 matrix: the new matrix has all men and women as its rows and the scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know the id of the person each score belongs to after sorting. So here I wanted to use a hash table. Once we have the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second, n by n/2, matrix that records how much preference each man/woman has for each woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on, for each man/woman. I hope to get good suggestions on the data structures to use; I prefer PHP or Perl. Thank you in advance. Hey guys, this is not homework. It is a slightly modified version of the stable marriage algorithm, and I have a working solution; I am only working on optimizing my code. More info: it is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests the people share. I implemented it the way you see on the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem - I solved it and can run it. I am just trying to find a better solution, so I am asking for suggestions on the type of data structure to use. Conceptually I tried using an array of hashes, where the array index gives the person id and the hash in it maps ids to scores in sorted order (see the sketch below). I initially start with an array of hashes; I then sort each hash on its values, but I could not store the sorted hashes back in an array. So I just stored the keys after sorting and used them to look up values in my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?
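
    Perl hashes (like Python dicts) don't preserve order, so the usual trick is to keep the raw scores in a mapping and store the sorted candidate ids as a separate list per person. A sketch of that shape in Python (the same structure ports directly to Perl arrays-of-hashes; the scores here are made-up sample data):

        # scores[p] maps candidate id -> preference score for person p
        scores = [
            {2: 7, 3: 1},   # person 0's scores for candidates 2 and 3
            {2: 4, 3: 9},   # person 1's scores
        ]

        # preference[p] is person p's candidate ids, sorted by descending score.
        preference = [sorted(s, key=s.get, reverse=True) for s in scores]

        print(preference[0])                 # [2, 3]: person 0 prefers candidate 2
        print(scores[0][preference[0][0]])   # 7: the top score for person 0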

  • How to generate an automatic response from the server in Flex

    - by Jeeva
    I'm developing a web-based game application using Flex. I'm placing images dynamically inside a canvas. Whenever I place an image, it should change its state after a period of time - for example, after 2 minutes. The time has to be tracked for each image from the moment I place it. How shall I handle this?

  • Can I use the MAX function for each tuple in the retrieved data set?

    - by kshama
    Hi, my table result contains the fields:

        id  count
        ------------
        1   3
        2   2
        3   2

    From this table I have to form another table, score, which should look as follows:

        id  my_score
        ------------
        1   1.0000
        2   0.6667
        3   0.6667

    That is, my_score = count / MAX(count). But if I give the query as

        create TEMPORARY TABLE(select id, (count/MAX(count)) AS my_score from result);

    only the 1st row is retrieved. Can anyone suggest a query so that my_score is calculated for all tuples? Thanks in advance.
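
    One standard fix is to compute MAX(count) in a scalar subquery, so it is evaluated once and applied to every row, instead of mixing an aggregate into a non-grouped select. A runnable sketch via Python's sqlite3 (the same SELECT form works in MySQL; the 1.0 * forces decimal rather than integer division):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE result (id INTEGER, count INTEGER)")
        con.executemany("INSERT INTO result VALUES (?, ?)", [(1, 3), (2, 2), (3, 2)])

        query = """
            SELECT id, 1.0 * count / (SELECT MAX(count) FROM result) AS my_score
            FROM result
        """
        for row in con.execute(query):
            print(row)   # (1, 1.0), (2, 0.666...), (3, 0.666...)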

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem

    72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013:
          id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013:
          station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013:
          flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.station_id IN (
            SELECT
              s.id
            FROM
              climate.station s,
              climate.city c
            WHERE
              -- For one city ...
              c.id = 5182 AND
              -- Where stations are within an elevation range ...
              s.elevation BETWEEN 0 AND 3000 AND
              6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
              ) <= 50
          ) AND
          -- Begin extracting the data from the database.
          -- The data before 1900 is shaky; insufficient after 2009.
          extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
          -- Whittled down by category ...
          m.category_id = 1 AND
          m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
              (extract( YEAR FROM m.taken )||'-12-31')::date -
              (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
            ) AS text)||'-12-31')::date
        GROUP BY
          extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
            (extract( YEAR FROM m.taken )||'-12-31')::date -
            (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question

    What is the proper way to index the dates to avoid full table scans?

    Options I have considered:

    - GIN
    - GiST
    - Rewrite the WHERE clause
    - Separate year_taken, month_taken, and day_taken columns to the tables

    What are your thoughts? Thank you!

  • Rating System Database Structure

    - by Harsha M V
    I have two entity groups: Restaurants and Users. Restaurants can be rated (1-5) by users, and the rating from each user should be retrievable.

        Restaurant (id, name, ..... , total_number_of_votes, total_voting_points)
        User (id, name ...... )
        Rating (id, restaurant_id, user_id, rating_value)

    Do I need to store the average value so that it need not be calculated every time? Which table is the best place to store avg_rating, total_no_of_votes, and total_voting_points?
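
    One common pattern, sketched below with field names matching the schema above, is to keep the running totals on the restaurant row and derive the average from them, updating the counters in the same transaction that inserts the Rating row (a hedged illustration of the idea, not a prescribed design):

        class Restaurant(object):
            def __init__(self):
                self.total_number_of_votes = 0
                self.total_voting_points = 0

            def add_rating(self, rating_value):
                # Update the denormalized counters; in a real app, do this in
                # the same transaction that inserts the Rating row.
                self.total_number_of_votes += 1
                self.total_voting_points += rating_value

            @property
            def avg_rating(self):
                # Derived on read; it cannot drift because the counters are authoritative.
                if self.total_number_of_votes == 0:
                    return None
                return self.total_voting_points / float(self.total_number_of_votes)

        r = Restaurant()
        for v in (5, 4, 3):
            r.add_rating(v)
        print(r.avg_rating)  # 4.0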

  • PHP max_execution_time not timing out

    - by Joey Ezekiel
    This is not one of the regular questions about whether sleep is counted toward the timeout or anything like that. OK, here's the problem: I've set the max_execution_time for PHP to 15 seconds, and ideally the script should time out when it crosses that limit, but it doesn't. Apache has been restarted after the change to the php.ini file, and an ini_get('max_execution_time') looks fine. Sometimes the script runs for up to 200 seconds, which is crazy. I have no database communication whatsoever. All the script does is look for files on the Unix filesystem and, in some cases, redirect to another JSP page. There is no sleep() in the script. I calculate the total execution time of the PHP script like this - at the start of the script I set:

        $_mtime = microtime();
        $_mtime = explode(" ", $_mtime);
        $_mtime = $_mtime[1] + $_mtime[0];
        $_gStartTime = $_mtime;

    and the end time ($_gEndTime) is calculated similarly. The total time is calculated in a shutdown function that I've registered:

        register_shutdown_function('shutdown');
        .............
        function shutdown() {
            ..............
            ..............
            $_total_time = $_gEndTime - $_gStartTime;
            ..............
            switch (connection_status()) {
                case CONNECTION_NORMAL:
                    ....
                    break;
                ....
                case CONNECTION_TIMEOUT:
                    ....
                    break;
                ......
            }
        }

    Note: I cannot use $_SERVER['REQUEST_TIME'] because my PHP version is incompatible. That sucks - I know.

    1) My first question, obviously, is why is my PHP script executing even after the set timeout limit?
    2) Apache has the Timeout directive, which is 300 seconds, but the PHP binary does not read the Apache config, so this should not be a problem.
    3) Is there a possibility that something is sending PHP into a sleep mode?
    4) Am I calculating the execution time in a wrong way? Is there a better way to do this?

    I'm stumped at this point. PHP wizards - please help.

  • Symfony 1.4/ Doctrine; n-m relation data cannot be accessed in template (indexSuccess)

    - by chandimak
    I have a database with 3 tables. It's a simple n-m relationship: Student, Course, and StudentHasCourse to handle the n-m relationship. I post the schema.yml for reference, though it should not really be necessary.

        Course:
          connection: doctrine
          tableName: course
          columns:
            id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            name:
              type: string(45)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            StudentHasCourse:
              local: id
              foreign: course_id
              type: many

        Student:
          connection: doctrine
          tableName: student
          columns:
            id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            registration_details:
              type: string(45)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            name:
              type: string(30)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            StudentHasCourse:
              local: id
              foreign: student_id
              type: many

        StudentHasCourse:
          connection: doctrine
          tableName: student_has_course
          columns:
            student_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            course_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            result:
              type: string(1)
              fixed: true
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Course:
              local: course_id
              foreign: id
              type: one
            Student:
              local: student_id
              foreign: id
              type: one

    Then I get data from the tables in executeIndex() with the following query:

        $q_info = Doctrine_Query::create()
          ->select('s.*, shc.*, c.*')
          ->from('Student s')
          ->leftJoin('s.StudentHasCourse shc')
          ->leftJoin('shc.Course c')
          ->where('c.id = 1');
        $this->infos = $q_info->execute();

    Then I access the data by looping through it in indexSuccess.php. But in indexSuccess I can only access data from the table Student:

        <?php foreach ($infos as $info): ?>
          <?php echo $info->getId(); ?>
          <?php echo $info->getName(); ?>
        <?php endforeach; ?>

    I expected that I could access StudentHasCourse data and Course data like the following, but it generates an error:

        <?php echo $info->getStudentHasCourse()->getResult() ?>
        <?php echo $info->getStudentHasCourse()->getCourse()->getName() ?>

    The first statement gives a warning:

        Warning: call_user_func_array() expects parameter 1 to be a valid callback,
        class 'Doctrine_Collection' does not have a method 'getCourse' in
        D:\wamp\bin\php\php5.3.5\PEAR\pear\symfony\escaper\sfOutputEscaperObjectDecorator.class.php on line 64

    And the second statement gives the above warning and the following error:

        Fatal error: Call to a member function getName() on a non-object in
        D:\wamp\www\sam\test_doc_1\apps\frontend\modules\registration\templates\indexSuccess.php on line 5

    When I check the query from the debug toolbar, it appears as follows and returns all the data I want:

        SELECT s.id AS s__id, s.registration_details AS s__registration_details,
               s.name AS s__name, s2.student_id AS s2__student_id,
               s2.course_id AS s2__course_id, s2.result AS s2__result,
               c.id AS c__id, c.name AS c__name
        FROM student s
        LEFT JOIN student_has_course s2 ON s.id = s2.student_id
        LEFT JOIN course c ON s2.course_id = c.id
        WHERE (c.id = 1)

    Though the question is short, with all the information included it became rather long. It would be highly appreciated if someone could help me solve this. What I need is to access the data from StudentHasCourse and Course. If those data cannot be accessed with this design and this query, any other methodology is also appreciated.

  • MySQL 100% CPU + slow query

    - by felipeclopes
    I'm using the RDS database from amazon with a some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and faced this result from the explain command +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort | | 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where | | 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where | | 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ Also checkin in the process list, I got something like this: +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL | | 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL | | 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL | | 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL | | 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | ... 
So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows which is not much. Edit 1 Another information that might be useful is the slow query_log SET timestamp=1401457485; SELECT my_query... # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435 # Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387 Edit 2 After profiling, I got this result. The result have approximately 250 rows with two columns each. +----------------------+----------+ | state | duration | +----------------------+----------+ | Sending data | 272 | | removing tmp table | 0 | | optimizing | 0 | | Creating sort index | 0 | | init | 0 | | cleaning up | 0 | | executing | 0 | | checking permissions | 0 | | freeing items | 0 | | Creating tmp table | 0 | | query end | 0 | | statistics | 0 | | end | 0 | | System lock | 0 | | Opening tables | 0 | | logging slow query | 0 | | Sorting result | 0 | | starting | 0 | | closing tables | 0 | | preparing | 0 | +----------------------+----------+ Edit 3 Adding query as requested SELECT activities.share_count, activities.created_at FROM `activities_businesses` INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id` INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id` JOIN taggings activities_b_taggings_975e9c4 ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness' AND activities_b_taggings_975e9c4.tag_id = 104 AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44' WHERE ( businesses.id = 1 ) AND ( activities.created_at > '2014-04-30 13:36:44' ) AND ( activities.created_at < '2014-05-30 12:27:03' ) ORDER BY activities.created_at; Edit 4 There may be a chance that the indexes are not being applied due to difference in column type between the taggings and the activities_businesses, on the taggable_id column. mysql> SHOW COLUMNS FROM activities_businesses; +-------------+------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | activity_id | bigint(20) | YES | MUL | NULL | | | business_id | bigint(20) | YES | MUL | NULL | | +-------------+------------+------+-----+---------+----------------+ 3 rows in set (0.01 sec) mysql> SHOW COLUMNS FROM taggings; +---------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | tag_id | int(11) | YES | MUL | NULL | | | taggable_id | bigint(20) | YES | | NULL | | | taggable_type | varchar(255) | YES | | NULL | | | tagger_id | int(11) | YES | | NULL | | | tagger_type | varchar(255) | YES | | NULL | | | context | varchar(128) | YES | | NULL | | | created_at | datetime | YES | | NULL | | +---------------+--------------+------+-----+---------+----------------+ So it is examining way more rows than it shows in the explain query, probably because some indexes are not being applied. Do you guys can help m with that?

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS-DOS, with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened in Notepad. I'm using a variable-length record format much like CSV, which I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the calculated size difference before saving the updated record; EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well; however, I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size, because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made to an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records is, and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file, writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users will end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming, and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?

  • Will mmap use user CPU instead of sys CPU? (Solaris)

    - by Daniel
    When using mmap to allocate some anonymous memory, we often set the start address to 0/NULL so that mmap will figure out the starting address by itself. To find that starting address, it works through the whole virtual memory space to find a hole that can hold the chunk of memory to be allocated. I guess this is counted as user CPU instead of sys CPU. If the virtual memory is fragmented, then the time to find the starting address will use more user CPU - is my understanding correct?
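
    One way to check empirically: mmap is a system call, so the kernel's search for a free region would normally be accounted as sys time rather than user time. A minimal Python sketch using getrusage to compare the two (Unix/Solaris only; the mapping size and count are arbitrary):

        import mmap
        import resource

        def cpu_times():
            # ru_utime is user CPU, ru_stime is system (kernel) CPU.
            ru = resource.getrusage(resource.RUSAGE_SELF)
            return ru.ru_utime, ru.ru_stime

        u0, s0 = cpu_times()
        maps = [mmap.mmap(-1, 4096) for _ in range(1000)]  # anonymous mappings
        u1, s1 = cpu_times()
        print("user: %.4fs  sys: %.4fs" % (u1 - u0, s1 - s0))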

  • Join and sum incompatible matrices through data.table

    - by leodido
    My goal is to "sum" two incompatible matrices (matrices with different dimensions) using (and preserving) row and column names. I've figured out this approach: convert the matrices to data.table objects, join them, and then sum the column vectors. An example:

        > M1
          1 3 4 5 7 8
        1 0 0 1 0 0 0
        3 0 0 0 0 0 0
        4 1 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0
        > M2
          1 3 4 5 8
        1 0 0 1 0 0
        3 0 0 0 0 0
        4 1 0 0 0 0
        5 0 0 0 0 0
        8 0 0 0 0 0
        > M1 %ms% M2
          1 3 4 5 7 8
        1 0 0 2 0 0 0
        3 0 0 0 0 0 0
        4 2 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0

    This is my code:

        library(data.table)

        M1 <- matrix(c(0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 6)
        colnames(M1) <- c(1,3,4,5,7,8)
        M2 <- matrix(c(0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 5)
        colnames(M2) <- c(1,3,4,5,8)

        # to data.table objects
        DT1 <- data.table(M1, keep.rownames = TRUE, key = "rn")
        DT2 <- data.table(M2, keep.rownames = TRUE, key = "rn")

        # join and sum of common columns
        if (nrow(DT1) > nrow(DT2)) {
          A <- DT2[DT1, roll = TRUE]
          A[, list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1,
                   X5 = X5 + X5.1, X7, X8 = X8 + X8.1), by = rn]
        }

    That outputs:

           rn X1 X3 X4 X5 X7 X8
        1:  1  0  0  2  0  0  0
        2:  3  0  0  0  0  0  0
        3:  4  2  0  0  0  0  0
        4:  5  0  0  0  0  0  0
        5:  7  0  0  0  0  1  0
        6:  8  0  0  0  0  0  0

    Then I can convert this data.table back to a matrix and fix the row and column names. The questions are:

    1) How can this procedure be generalized? I need a way to automatically create list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1), because I want to apply this function to matrices whose dimensions (and row/column names) are not known in advance. In summary, I need a merge procedure that behaves as described.
    2) Are there other strategies/implementations that achieve the same goal and are, at the same time, faster and more general? (hoping that some data.table monster helps me)
    3) What kind of join (inner, outer, etc.) does this procedure correspond to?

    Thanks in advance.

    p.s.: I'm using data.table version 1.8.2

    EDIT - SOLUTIONS

    @Aaron's solution. No external libraries, only base R. It also works on lists of matrices.

        add_matrices_1 <- function(...) {
          a <- list(...)
          cols <- sort(unique(unlist(lapply(a, colnames))))
          rows <- sort(unique(unlist(lapply(a, rownames))))
          out <- array(0, dim = c(length(rows), length(cols)), dimnames = list(rows, cols))
          for (m in a) out[rownames(m), colnames(m)] <- out[rownames(m), colnames(m)] + m
          out
        }

    @MadScone's solution. Uses the reshape2 package. It works on two matrices per call.

        library(reshape2)

        add_matrices_2 <- function(m1, m2) {
          m <- acast(rbind(melt(m1), melt(m2)), Var1~Var2, fun.aggregate = sum)
          mn <- unique(colnames(m1), colnames(m2))
          rownames(m) <- mn
          colnames(m) <- mn
          m
        }

    BENCHMARK (100 runs with the microbenchmark package)

        Unit: microseconds
                    expr       min         lq     median         uq       max
        1 add_matrices_1   196.009   257.5865    282.027   291.2735   549.397
        2 add_matrices_2 13737.851 14697.9790  14864.778 16285.7650 25567.448

    No need to comment the benchmark: @Aaron's solution wins. I'll continue to investigate a similar solution for data.table objects. I'll add other solutions as they are reported or discovered.

  • Ruby: Parse, replace, and evaluate a string formula

    - by Swartz
    I'm creating a simple Ruby on Rails survey application for a friend's psychological survey project. So we have surveys, each survey has a bunch of questions, and each question has options participants can choose from. Nothing exciting. One of the interesting aspects is that each answer option has a score value associated with it, and so for each survey a total score needs to be calculated based on these values. Now my idea, instead of hard-coding calculations, is to allow the user to add a formula by which the total survey score will be calculated. Example formulas:

        "Q1 + Q2 + Q3"
        "(Q1 + Q2 + Q3) / 3"
        "(10 - Q1) + Q2 + (Q3 * 2)"

    So just basic math (with some extra parentheses for clarity). The idea is to keep the formulas very simple, such that anyone with basic math can enter them without resorting to some fancy syntax. My idea is to take any given formula, replace placeholders such as Q1, Q2, etc. with the score values based on what the participant chooses, and then eval() the newly formed string. Something like this:

        f = "(Q1 + Q2 + Q3) / 2"                 # some crazy formula for this survey
        values = {:Q1 => 1, :Q2 => 2, :Q3 => 2}  # values for substitution
        result = f.gsub(/(Q\d+)/) { |m| values[$1.to_sym] }  # string to be eval()-ed
        eval(result)

    So my questions are:

    1) Is there a better way to do this? I'm open to any suggestions.
    2) How should I handle formulas where not all placeholders were successfully replaced (e.g. one question wasn't answered, so :Q3 => 2 wasn't in the values hash)? My idea is to rescue eval()... any thoughts?
    3) How do I get the proper result? It should be 2.5, but due to integer arithmetic it will truncate to 2. I can't expect people who provide the correct formula to understand this nuance (e.g. / 2.0).
    4) I do not expect this, but how can I best protect eval() from abuse (e.g. a bad formula, or manipulated values coming in)?

    Thank you!

  • Distributions and hashes

    - by Don Mackenzie
    Has anyone ever had an instance of downloading software from a genuine site, where an MD5 or SHA-series hash for the download is also supplied, and then discovered that the hash calculated from the downloaded artifact doesn't match the published hash? I understand the theory but am curious how prevalent the problem is. Many software publishers seem to discount the threat.
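
    For reference, the check itself is cheap. A minimal Python sketch of verifying a published digest (the file here is a stand-in written on the spot so the example runs; in practice you would hash the real download and compare against the publisher's value):

        import hashlib

        def file_sha256(path, chunk_size=1 << 20):
            # Hash in chunks so large downloads don't have to fit in memory.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Stand-in for a real download; b"test" hashes to the digest below.
        with open("download.bin", "wb") as f:
            f.write(b"test")

        published = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
        print(file_sha256("download.bin") == published)  # True: intact artifact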

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema) - let us call it the test record - I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that duplicates one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be. The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema, and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field. You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number, I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching. My initial plan was to create a hash of hashes, the top level being the field name. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record. My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records; f(t), the frequency with which a token t appeared in the corpus; the probability o that a record is an original and not a duplicate; and the probability p that the test record is really a record x, given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors? Barring that, has anyone got a way to do this? I'm especially keen on other suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on Stack Overflow; thanks in advance for any replies you may see fit to give.
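
    One standard heuristic that matches this intuition is inverse document frequency (IDF) weighting: a token's weight is log(n / f(t)), so a phone number seen once in 150,000 records counts far more than one seen 100 times. A sketch in Python (this is the usual heuristic, not the precise probabilistic relationship asked about above):

        import math

        def idf_weight(token_freq, num_records):
            # Rare tokens are more discriminating: weight grows as frequency falls.
            return math.log(float(num_records) / token_freq)

        num_records = 150000
        print(idf_weight(1, num_records))    # ~11.9 for a token seen once
        print(idf_weight(100, num_records))  # ~7.3 for a token seen 100 times

        def match_score(shared_tokens, corpus_freq):
            # Sum the IDF weights of tokens shared between the test record and
            # a candidate record; a higher score means a more likely duplicate.
            return sum(idf_weight(corpus_freq[t], num_records) for t in shared_tokens)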

  • How to process large block data visualization with Flex?

    - by hydra1983
    I know that's a big topic. However, it's better to know some general ideas for handling such problems. I have an application which requires Flex to render statistics calculated instantly on the client side from a downloaded data set. The problems are:

    - the data set is large and needs more than 10 seconds to be downloaded.
    - there are some filters that control the statistics calculation algorithms. If the user changes the filters, it takes a long time to recalculate the result, and that freezes the UI.

  • C# calculation help

    - by Jonas B
    Hi, I'm pretty useless when it comes to math and I have a problem I need help with. This has nothing to do with schoolwork; it's in fact about Alcatel and the ticket extractor. I have two values from which a global id needs to be calculated in a C# application, according to a formula specified in their documentation: "The global callid is equal to: callid1 multiplied by 2 power 32 plus callid2". As I said, I'm not big on math, so that statement says nothing to me. If anyone knows how to calculate it, I'd appreciate it! Thanks
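
    For what it's worth, that formula just packs two 32-bit values into one 64-bit number: callid1 becomes the high 32 bits and callid2 the low 32 bits, i.e. global = callid1 * 2^32 + callid2. A sketch of the arithmetic in Python with made-up sample values (in C#, the same thing would typically use a 64-bit type such as ulong and a left shift by 32):

        callid1 = 7
        callid2 = 123456

        # Multiplying by 2**32 and shifting left by 32 bits are the same operation.
        global_callid = callid1 * 2**32 + callid2
        assert global_callid == (callid1 << 32) + callid2

        print(global_callid)  # 30064894528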

  • Looking for interesting formula

    - by Thinker
    I'm creating a game where players can make an alloy. To make it less predictable and more interesting, I thought that the durability and hardness of an alloy should not be calculated by a simple formula, because it would be extremely easy to find the extrema where the alloy has the best statistics. So the question is: is there any formula for a function whose extrema can be found only by investigating all points? Input values will be in percents: 0.0%-100.0%. I think it should look like this: half of a sound wave.
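
    Strictly speaking, no smooth closed-form formula can hide its extrema from calculus, but mixing a few sine terms with seed-derived frequencies scatters the local maxima enough that sampling every point is the only practical way to find the best mix. A hypothetical Python sketch (the seed, frequencies, and weights are arbitrary choices for illustration, not a known game formula):

        import math

        def alloy_quality(percent_a, percent_b, seed=12345):
            # Sum a few sine/cosine terms with 'random' frequencies derived from
            # the seed; this gives many scattered local maxima and no obvious
            # analytic optimum.
            score = 0.0
            for k in range(1, 5):
                freq = 0.05 * ((seed >> (4 * k)) % 13 + 2)  # pseudo-random frequency
                score += math.sin(freq * percent_a + k) * math.cos(freq * percent_b - k) / k
            return score

        # Brute-force sampling over the 0-100% grid is the reliable way to
        # find the best mix of two ingredients.
        best = max(((a, b) for a in range(101) for b in range(101)),
                   key=lambda p: alloy_quality(*p))
        print(best)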
