Search Results

Search found 14719 results on 589 pages for 'optimization level'.

Page 84/589

  • Optimizing a 3D World Javascript Animation

    - by johnny
    Hi! I've recently come up with the idea to create a tag-cloud-like animation shaped like the earth. I've extracted the coastline coordinates from ngdc.noaa.gov and wrote a little script that displays them in my browser. Now, as you can imagine, the whole coastline consists of about 48,919 points, which my script would individually render (each coordinate being represented by one span). Obviously no browser is capable of rendering this fluently, but it would be nice if I could render as many as, say, 200 spans (twice as many as now) on my old P4 2.8 GHz (as a representative benchmark). Are there any JavaScript optimizations I could use in order to speed up the display of those spans? One 'coordinate':

        <div id="world_pixels">
          <span id="wp_0"
                style="position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer; cursor:hand;"
                onmouseover="magnify_world_pixel('wp_0');"
                onmouseout="shrink_world_pixel('wp_0');"
                onClick="set_askcue_bar('', 'new york')">new york</span>
        </div>

    The script (note: the original used `x^2`, but `^` is bitwise XOR in JavaScript, not exponentiation; the squarings below have been written as multiplications):

        $(document).ready(function () {
            world_pixels = $("#world_pixels span");
            world_pixels.spin();
            setInterval("world_pixels.spin()", 1500);
        });

        z = new Array();

        $.fn.spin = function () {
            for (i = 0; i < this.length; i++) {
                /* actual screen coordinates: x/y/z --> left/font-size/top
                                300/13/0
                     300/6/300    |  /
                                  | /
                    0/13/300  ----|----  600/13/300
                                 /|
                                / |
                      300/20/300    300/13/600
                */
                var resize_x = 1;   /* scale font size */
                var resize_y = 2.5; /* scale width */
                var resize_z = 2.5; /* scale height */
                var from_left = 300;
                var from_top = 20;
                /* actual math coordinates:
                             1
                       -1    |  /
                             | /
                    1    ----|----   -1
                            /|
                           / |
                          1    -1
                */
                var font_size = parseInt($(this[i]).css("font-size"));
                var left = parseInt($(this[i]).css("left"));
                if (!coast_line_array[i][1]) {
                    var top = parseInt($(this[i]).css("top"));
                    /* z is global because it's used in other functions later on */
                    z[i] = from_top + (top - (300 * resize_z)) / (300 * resize_z);
                    var top_new = from_top + Math.round(Math.cos(coast_line_array[i][2] / 90 * Math.PI) * (300 * resize_z) + (300 * resize_z));
                    $(this[i]).css("top", top_new);
                    coast_line_array[i][3] = 1;
                }
                var x = resize_x * (font_size - 13) / 7;
                var y = from_left + (left - (300 * resize_y)) / (300 * resize_y);
                if (y >= 0) {
                    this[i].phi = Math.acos(x / Math.sqrt(x * x + y * y));
                } else {
                    this[i].phi = 2 * Math.PI - Math.acos(x / Math.sqrt(x * x + y * y));
                }
                this[i].theta = Math.acos(z[i] / Math.sqrt(x * x + y * y + z[i] * z[i]));
                var font_size_new = resize_x * Math.round(Math.sin(coast_line_array[i][4] / 90 * Math.PI) * Math.cos(coast_line_array[i][0] / 180 * Math.PI) * 7 + 13);
                var left_new = from_left + Math.round(Math.sin(coast_line_array[i][5] / 90 * Math.PI) * Math.sin(coast_line_array[i][0] / 180 * Math.PI) * (300 * resize_y) + (300 * resize_y));
                if ((coast_line_array[i][0] + 1) > 180) {
                    coast_line_array[i][0] = -180;
                } else {
                    coast_line_array[i][0] = coast_line_array[i][0] + 0.25;
                }
                $(this[i]).css("font-size", font_size_new);
                $(this[i]).css("left", left_new);
            }
        }

        resize_x = 1;

        function magnify_world_pixel(element) {
            $("#" + element).animate({ fontSize: resize_x * 30 + "px" }, { duration: 1000 });
        }

        function shrink_world_pixel(element) {
            $("#" + element).animate({ fontSize: resize_x * 6 + "px" }, { duration: 1000 });
        }

    I'd appreciate any suggestions to optimize my script; maybe there is even a totally different approach on how to go about this.
    The whole .js file which stores the array for all the coordinates is available on my page. The file is about 2.9 MB, so you might consider pulling the .zip for local testing: metaroulette.com/files/31218.zip metaroulette.com/files/31218.js

    P.S. The PHP I use to create the spans:

        <?php
        $arbitrary_characters = array('cat','table','cool','deloitte','askcue','what','more','less','adjective','nice','clinton','mars','jupiter','testversion','beta','hilarious','lolcatz','funny','obama','president','nice','what','misplaced','category','people','religion','global','skyscraper','new york','dubai','helsinki','volcano','iceland','peter','telephone','internet','dialer','cord','movie','party','chris','guitar','bentley','ford','ferrari','etc','de facto');
        for ($i = 0; $i < 96; $i++) {
            $arb_digits = rand(0, 45);
            $arbitrary_character = $arbitrary_characters[$arb_digits];
            echo "<span id=\"wp_$i\" style=\"position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;\" onmouseover=\"magnify_world_pixel('wp_$i');\" onmouseout=\"shrink_world_pixel('wp_$i');\" onClick=\"set_askcue_bar('', '$arbitrary_character')\">$arbitrary_character</span>\n";
        }
        ?>

    Read the article

  • Efficient code to avoid circular references in c# object model

    - by Kumar
    I have an Excel-like grid where values can be typed that reference other rows. To check for circular references when a new value is entered, I traverse the tree and build a list of the values referenced so far; if the current value is found in this list, I return an error, thus avoiding a circular reference. This is infrequent enough that extreme performance is not an issue, but... Question: is there a better way? I'm told it's not the most optimal, but no answer was provided, so on to the experts @ SO :)
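
    A minimal sketch of the check being described, in Python (the cell objects and the `get_references` accessor are hypothetical stand-ins for the C# object model). The improvement over re-scanning a referenced-so-far list is to keep two sets: cells on the current traversal path (a hit there is a cycle) and cells already fully explored (safe to skip), which makes the whole check linear in the number of references:

        def creates_cycle(start, get_references):
            """Return True if following references from `start` ever leads back
            to a cell that is still on the current traversal path."""
            on_path, done = set(), set()

            def visit(cell):
                if cell in on_path:
                    return True   # back-edge: circular reference
                if cell in done:
                    return False  # already explored, no cycle through here
                on_path.add(cell)
                if any(visit(ref) for ref in get_references(cell)):
                    return True
                on_path.remove(cell)
                done.add(cell)
                return False

            return visit(start)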

    Read the article

  • Most optimized way to calculate modulus in C

    - by hasanatkazmi
    I want to minimize the cost of calculating a modulus in C. Say I have a number x, and n is the number that will divide x. When n == 65536 (which happens to be 2^16):

        mod = x % n;      /* 11 assembly instructions as produced by GCC */

    or

        mod = x & 0xffff; /* equal to mod = x & 65535 (4 assembly instructions) */

    So GCC doesn't optimize it to this extent. In my case n is not a power of two but is the largest prime less than 2^16, which is 65521. As I showed for n == 2^16, bit-wise operations can optimize the computation. What bit-wise operations can I perform when n == 65521 to calculate the modulus?
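
    One classic trick for a modulus just under a power of two (a sketch of the general folding idea, not from the original thread): since 2^16 mod 65521 = 15, the value x = hi*2^16 + lo is congruent to hi*15 + lo mod 65521, so the high bits can be folded into the low bits with a shift, a mask, and a small multiply. Illustrated in Python; the same steps translate directly to C:

        def mod_65521(x):
            """Reduce x mod 65521 using the congruence 2**16 = 15 (mod 65521)."""
            while x >= 1 << 16:
                # fold the high half into the low half; the value shrinks fast
                x = (x >> 16) * 15 + (x & 0xFFFF)
            while x >= 65521:
                x -= 65521  # at most a couple of final subtractions
            return x

        assert all(mod_65521(v) == v % 65521 for v in range(100000))

    Adler-32 implementations use the same congruence to defer their mod-65521 reductions.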

    Read the article

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (a search query, basically, based on tags):

        SELECT SUM(DISTINCT(ttagrels.id_tag IN (2105,2120,2151,2026,2046))) AS key_1_total_matches,
               td.*, u.*
        FROM Tutors_Tag_Relations AS ttagrels
        JOIN Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor
        JOIN Users AS u ON u.id_user = td.id_user
        WHERE (ttagrels.id_tag IN (2105,2120,2151,2026,2046))
        GROUP BY td.id_tutor
        HAVING key_1_total_matches = 1

    And following is the database dump needed to execute this query:

        CREATE TABLE IF NOT EXISTS `Users` (
          `id_user` int(10) unsigned NOT NULL auto_increment,
          `id_group` int(11) NOT NULL default '0',
          PRIMARY KEY (`id_user`),
          KEY `Users_FKIndex1` (`id_group`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730;

        INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1);

        CREATE TABLE IF NOT EXISTS `Tutor_Details` (
          `id_tutor` int(10) unsigned NOT NULL auto_increment,
          `id_user` int(10) NOT NULL default '0',
          PRIMARY KEY (`id_tutor`),
          KEY `Users_FKIndex1` (`id_user`),
          KEY `id_user` (`id_user`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58;

        INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303);

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (2026, 'Brendan.\nIn'), (2046, 'Brendan.'), (2105, 'Brendan'),
        (2120, 'Brendan''s'), (2151, 'Brendan)');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) unsigned default NULL,
          `tutor_field` varchar(255) default NULL,
          `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
          `udate` timestamp NULL default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`),
          KEY `id_tutor_2` (`id_tutor`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`)
        VALUES (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL);

        ALTER TABLE `Tutors_Tag_Relations`
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION,
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    What does the query do? It searches for tutors whose records contain "Brendan" (as their name or biography or something). The id_tags 2105, 2120, 2151, 2026, 2046 are simply the tags that are LIKE "%Brendan%". My questions are:

    1. In the EXPLAIN of this query, the ref column shows NULL for ttagrels, but there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being used, and how can I make the query use one? Is it possible at all?
    2. The other two tables, td and u, are using references. Is any indexing needed on those? I think not. Check the EXPLAIN output here: http://www.test.examvillage.com/explain.png

    Read the article

  • CSS files that don't end with .css

    - by Yongho
    Is there a disadvantage to using a dynamic Python file to generate the CSS for a webpage? I'd like computers with an administrator cookie to show special admin panel CSS, and show regular CSS for all other users. I'm planning to use:

        <link rel="stylesheet" href="/css.py" type="text/css" />
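
    For illustration, a minimal WSGI sketch of the idea (the cookie name, its value, and the stylesheets are all hypothetical). The parts worth getting right are the Content-Type header, since browsers can refuse stylesheets served as text/html, and a Cache-Control header, so a shared cache never hands the admin CSS to a regular user:

        from http.cookies import SimpleCookie

        ADMIN_CSS = b"body { background: #fee; } .admin-panel { display: block; }"
        USER_CSS = b"body { background: #fff; } .admin-panel { display: none; }"

        def css_app(environ, start_response):
            """Serve different CSS depending on a hypothetical 'admin' cookie."""
            cookies = SimpleCookie(environ.get("HTTP_COOKIE", ""))
            is_admin = "admin" in cookies and cookies["admin"].value == "1"
            body = ADMIN_CSS if is_admin else USER_CSS
            start_response("200 OK", [
                ("Content-Type", "text/css"),
                ("Cache-Control", "private"),  # the response varies per user
                ("Content-Length", str(len(body))),
            ])
            return [body]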

    Read the article

  • Fastest way to list all primes below N in python

    - by jbochi
    This is the best algorithm I could come up with after struggling with a couple of Project Euler questions.

        def get_primes(n):
            numbers = set(range(n, 1, -1))
            primes = []
            while numbers:
                p = numbers.pop()
                primes.append(p)
                numbers.difference_update(set(range(p*2, n+1, p)))
            return primes

        >>> timeit.Timer(stmt='get_primes.get_primes(1000000)', setup='import get_primes').timeit(1)
        1.1499958793645562

    Can it be made even faster?

    EDIT: This code has a flaw: since numbers is an unordered set, there is no guarantee that numbers.pop() will remove the lowest number from the set. Nevertheless, it works (at least for me) for some input numbers:

        >>> sum(get_primes(2000000))
        142913828922L  # that's the correct sum of all primes below 2 million
        >>> 529 in get_primes(1000)
        False
        >>> 529 in get_primes(530)
        True

    EDIT: The ranking so far (pure Python, no external sources, all primes below 1 million):

    1. Sundaram's Sieve, implemented by myself: 327 ms
    2. Daniel's Sieve: 435 ms
    3. Alex's recipe from the Cookbook: 710 ms

    EDIT: ~unutbu is leading the race.
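
    For comparison, a plain sieve of Eratosthenes, a common baseline for this kind of ranking, sketched here for reference rather than taken from the thread:

        def sieve(n):
            """Return all primes below n via a simple sieve of Eratosthenes."""
            if n < 3:
                return []
            is_prime = [True] * n
            is_prime[0] = is_prime[1] = False
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    # strike out multiples of p, starting at p*p
                    is_prime[p * p::p] = [False] * len(range(p * p, n, p))
            return [i for i, flag in enumerate(is_prime) if flag]

        assert sum(sieve(2000000)) == 142913828922  # matches the figure above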

    Read the article

  • Efficient algorithm for creating an ideal distribution of groups into containers?

    - by Inshim
    I have groups of students that need to be allocated into classrooms of a fixed capacity (say, 100 chairs in each). Each group must be allocated to a single classroom, even if it is larger than the capacity (i.e. there can be an overflow, with students standing up). I need an algorithm to make the allocations with minimum overflow and minimum under-capacity classrooms. A naive algorithm to do this allocation is horrendously slow with ~200 groups, where about half of them are under 20% of the classroom size. Any ideas where I can find at least a good starting point for making this algorithm lightning fast? Thanks!
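
    As a starting point, a sketch of the classic first-fit decreasing bin-packing heuristic (assuming rooms can be opened as needed; with a fixed set of rooms, the same largest-first order works with a best-fit room choice instead). Sorting dominates the cost, so ~200 groups take milliseconds rather than the naive search's combinatorial blow-up:

        def first_fit_decreasing(group_sizes, capacity):
            """Greedy allocation: place each group, largest first, into the
            first room with enough free seats; open a new room when none fits.
            Groups larger than `capacity` get a room of their own (overflow)."""
            free = []        # remaining seats per room
            assignment = []  # (group_size, room_index)
            for size in sorted(group_sizes, reverse=True):
                for idx, seats in enumerate(free):
                    if size <= seats:
                        free[idx] -= size
                        assignment.append((size, idx))
                        break
                else:
                    free.append(max(capacity - size, 0))  # overflow rooms end up with 0 free
                    assignment.append((size, len(free) - 1))
            return assignment, free

        groups = [95, 40, 40, 30, 15, 120]
        assignment, leftover = first_fit_decreasing(groups, 100)
        print(assignment)  # [(120, 0), (95, 1), (40, 2), (40, 2), (30, 3), (15, 2)]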

    Read the article

  • Create date efficiently

    - by Dave Jarvis
    On Pavel's page is the following function:

        CREATE OR REPLACE FUNCTION makedate(year int, dayofyear int) RETURNS date AS $$
          SELECT (date '0001-01-01'
                  + ($1 - 1) * interval '1 year'
                  + ($2 - 1) * interval '1 day')::date
        $$ LANGUAGE sql;

    I have the following code:

        makedate(y.year, 1)

    What is the fastest way in PostgreSQL to create a date for January 1st of a given year? Pavel's function would lead me to believe it is:

        date '0001-01-01' + y.year * interval '1 year' + interval '1 day';

    My thought would be more like:

        to_date( y.year||'-1-1', 'YYYY-MM-DD' );

    I am looking for the fastest way using PostgreSQL 8.4. (The query that uses the date function can select between 100,000 and 1 million records, so it needs speed.) Thank you!

    Read the article

  • Shouldn't prepared statements be much faster?

    - by silversky
        $s = explode(" ", microtime());
        $s = $s[0] + $s[1];
        $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');

        for ($i = 0; $i < 1000; $i++) {
            $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
            $stmt->execute();
            $stmt->bind_result($M, $m);
            $stmt->fetch();  // missing in the original as posted; needed to fill $M and $m
            $stmt->free_result();
            $rand = mt_rand($m, $M) . '<br/>';

            $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
            $res->bind_param("s", $rand);
            $res->execute();
            $res->free_result();
        }

        $e = explode(" ", microtime());
        $e = $e[0] + $e[1];
        echo number_format($e - $s, 4, '.', '');

    And:

        $link = mysql_connect("localhost", "test", "pass") or die();
        mysql_select_db("db") or die("Unable to select database" . mysql_error());

        for ($i = 0; $i < 1000; $i++) {
            $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
            $range_row = mysql_fetch_object($range_result);
            $random = mt_rand($range_row->min_id, $range_row->max_id);
            $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
        }

    Definitely prepared statements are much safer, and it is also said everywhere that they are much faster. BUT in my test of the above code I get:

    - 2.45 sec for prepared statements
    - 5.05 sec for the second example

    What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statement version?

    Read the article

  • speeding up website load using multiple servers/domains

    - by Mohammad
    The Yahoo! developer guide says: "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective." And as an explanation I read somewhere that browsers will load up to 5 things simultaneously from the same domain. Would a subdomain, for example cdn.example.com, be considered a new domain in the previous statement?

    Read the article

  • How to simplify this logic/code?

    - by Tattat
    I want to write an app that accepts user commands. The user command is used in this format:

        command -parameter

    For example, the app can have "Copy", "Paste", and "Delete" commands. I am thinking the program should work like this:

        public static void main(String args[]) {
            if (args[0].equalsIgnoreCase("COPY")) {
                // handle the copy command
            } else if (args[0].equalsIgnoreCase("PASTE")) {
                // handle the paste command
            } /* code skipped */
        }

    So, it works, but I think it will become more and more complex when I have more commands in my program, and it is also difficult to read. Any ideas to simplify the logic?
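
    One common simplification is table-driven dispatch: register each command's handler in a map and look it up, instead of chaining conditionals; adding a command then means adding one entry, not another branch. A sketch of the pattern in Python (the handlers are placeholders); in Java the same shape is a Map<String, Command> with a small Command interface, or an enum of commands:

        def do_copy(params):
            print("copying:", params)

        def do_paste(params):
            print("pasting:", params)

        def do_delete(params):
            print("deleting:", params)

        HANDLERS = {
            "copy": do_copy,
            "paste": do_paste,
            "delete": do_delete,
        }

        def dispatch(args):
            # case-insensitive lookup replaces the equalsIgnoreCase chain
            handler = HANDLERS.get(args[0].lower())
            if handler is None:
                raise SystemExit("unknown command: " + args[0])
            handler(args[1:])

        dispatch(["COPY", "-r", "file.txt"])  # prints: copying: ['-r', 'file.txt']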

    Read the article

  • Datastore performance, my code or the datastore latency

    - by fredrik
    I have had, for the last month, a bit of a problem with a quite basic datastore query. It involves two db.Models, with one referring to the other via a db.ReferenceProperty. The problem is that, according to the admin logs, the request takes about 2-4 seconds to complete. I stripped it down to a bare form and a list to display the results. The put works fine, but the get accumulates (in my opinion) way too much CPU time.

    The get looks like this:

        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = {
                    'item': labelItem,
                    'labels': []
                }
            outputData['items'][labelItem]['labels'].append(label.text)

        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

    And the models:

        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried to arrange it a number of different ways, e.g. instead of a ReferenceProperty, storing all Label keys in the Item model as a db.ListProperty. My test data is just 10 rows in Item and 40 in Label. So my question: is it a fool's errand to try to optimize this, since the high CPU usage is due to problems with the datastore, or have I just screwed up somewhere in the code? ..fredrik
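
    One thing worth checking with this pattern (a sketch of the well-known ReferenceProperty prefetch recipe, adapted here, so treat the details as an assumption): every `label.item.name` dereference issues its own datastore get, one per Label, and those serial round trips, not CPU, are usually what makes such a loop slow. Collecting the raw keys and fetching the Items in one batch removes them:

        labels = list(Label.all())

        # get_value_for_datastore reads the stored key without fetching the entity
        keys = list(set(Label.item.get_value_for_datastore(l) for l in labels))

        # one batch get for every referenced Item instead of one get per Label
        items_by_key = dict(zip(keys, db.get(keys)))

        for label in labels:
            item = items_by_key[Label.item.get_value_for_datastore(label)]
            # ... use item.name here instead of label.item.name ...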

    Read the article

  • MYSQL OR vs IN performance

    - by Scott
    I am wondering if there is any difference in regards to performance between the following:

        SELECT ... FROM ... WHERE someFIELD IN (1,2,3,4)
        SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
        SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...

    or will MySQL optimize the SQL in the same way compilers will optimize code?

    EDIT: Changed the ANDs to ORs for the reason stated in the comments.

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce the latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests; additional requests get queued. Using netstat, I see 24 connections in ESTABLISHED and 125 connections in TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection; I'm trying to quantify the accept-queue delay before that. Thanks in advance, David Jones

    Read the article

  • Need help writing a query

    - by harigm
    The following image has been uploaded to show what I am trying to do and what I want out of it. Can anyone help me write the query to get the results I want? Please check the following:

        SELECT * FROM KPT
        WHERE PROPERTY_ID IN (SELECT PROPERTY_ID FROM khata_header
                              WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
          AND CHALLAN_NO > 42646

    The above is the query I have written, and I get the following result set:

        ID    CHALLAN_NO  PROPERTY_ID  SITE_NO  TOTAL_AMOUNT
        ----- ----------- ------------ -------- ------------
        1242  42757       3103010141   296      595
        1243  63743       3204190257   483      594
        1244  63743       3204190257   483      594
        1334  43395       3217010223   1088     576
        1421  524210      3320050416   (null)   (null)
        1422  524210      3320050416   (null)   (null)
        1560  564355      3320021408   (null)   (null)
        1870  516292      3320040420   (null)   (null)
        1940  68357       3217100104   139      1153
        1941  68357       3217100104   139      1153
        2002  56256       3320100733   511      4430
        2003  56256       3320100733   511      4430
        2004  66488       3217040869   293      3094
        2005  66488       3217040869   293      3094
        2016  64571       3217040374   (null)   (null)
        2036  523122      3320020352   (null)   (null)
        2039  65682       3217040021   273      919

    In my result set, PROPERTY_ID is repeated, since there are multiple entries. How can I know how many are repeated, and which PROPERTY_IDs are repeated more than 2 times? A little background about the tables: PROPERTY_ID is the FK in KPT, and PROPERTY_ID is the PK in khata_header. I am writing a subquery to get the result, so I am stuck; I don't know how to get my results. Please help.
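
    The counting being asked for is a plain group-and-count: in SQL it would be a GROUP BY PROPERTY_ID with a HAVING condition on COUNT(*). For illustration, the same logic in Python over the PROPERTY_IDs fetched above:

        from collections import Counter

        property_ids = [
            '3103010141', '3204190257', '3204190257', '3217010223',
            '3320050416', '3320050416', '3320021408', '3320040420',
            '3217100104', '3217100104', '3320100733', '3320100733',
            '3217040869', '3217040869', '3217040374', '3320020352',
            '3217040021',
        ]

        counts = Counter(property_ids)
        repeated = {pid: n for pid, n in counts.items() if n >= 2}
        print(repeated)  # each repeated PROPERTY_ID with how many times it occurs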

    Read the article

  • exchanging 2 memory positions

    - by Jordi
    I am working with OpenCV and Qt. OpenCV uses BGR while Qt uses RGB, so I have to swap those 2 bytes for very big images. Is there a better way of doing the following? I cannot think of anything faster, but it looks so simple and lame...

        int width = iplImage->width;
        int height = iplImage->height;
        uchar *iplImagePtr = (uchar *) iplImage->imageData;
        uchar buf;
        int limit = height * width;

        for (int y = 0; y < limit; ++y) {
            buf = iplImagePtr[2];
            iplImagePtr[2] = iplImagePtr[0];
            iplImagePtr[0] = buf;
            iplImagePtr += 3;
        }

        QImage img((uchar *) iplImage->imageData, width, height,
                   QImage::Format_RGB888);

    Read the article

  • How fast can you make linear search?

    - by Mark Probst
    I'm looking to optimize this linear search:

        static int linear(const int *arr, int n, int key) {
            int i = 0;
            while (i < n) {
                if (arr[i] >= key)
                    break;
                ++i;
            }
            return i;
        }

    The array is sorted and the function is supposed to return the index of the first element that is greater than or equal to the key. The array is not large (below 200 elements) and will be prepared once for a large number of searches. Array elements after the n-th can, if necessary, be initialized to something appropriate, if that speeds up the search. No, binary search is not allowed, only linear search.
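
    One standard trick this setup allows (a sketch of the idea, not a benchmarked answer): because elements after the n-th may be initialized freely, storing a sentinel value >= any possible key at index n removes the `i < n` bounds test, leaving a single comparison per element. Illustrated in Python; the shape carries over to C directly:

        SENTINEL = 2**31 - 1  # assumed to be >= any key that will be searched

        def linear_sentinel(arr, n, key):
            """Linear search without a bounds check: arr[n] holds a sentinel
            >= every key, so the loop always stops by itself."""
            arr[n] = SENTINEL  # part of the one-time preparation step
            i = 0
            while arr[i] < key:
                i += 1
            return i

        data = [2, 5, 9, 14, 20, 0]  # last slot reserved for the sentinel
        assert linear_sentinel(data, 5, 10) == 3  # first element >= 10 is 14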

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 2:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 3:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 4:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 5:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading

    These headings need to have a 'Completion Status' boolean value which gets linked to a user ID. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        ---+--------+---------+---------+---------+---------+-------
        1  | 1      | 0       | 0       | 1       | 0       |
        2  | 2      | 1       | 0       | 1       | 1       |

    Each field represents one sub heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/

    Read the article

  • Simple MySQL Query taking 45 seconds (Gets a record and its "latest" child record)

    - by Brian Lacy
    I have a query which gets a customer and the latest transaction for that customer. Currently this query takes over 45 seconds for 1000 records. This is especially problematic because the script itself may need to be executed as frequently as once per minute! I believe using subqueries may be the answer, but I've had trouble constructing one that actually gives me the results I need.

        SELECT customer.CustID, customer.leadid, customer.Email,
               customer.FirstName, customer.LastName,
               transaction.*,
               MAX(transaction.TransDate) AS LastTransDate
        FROM customer
        INNER JOIN transaction ON transaction.CustID = customer.CustID
        WHERE customer.Email = '".$email."'
        GROUP BY customer.CustID
        ORDER BY LastTransDate
        LIMIT 1000

    I really need to get this figured out ASAP. Any help would be greatly appreciated!

    Read the article

  • SQL query task: a solution?

    - by Sirius Lampochkin
    There is a table of currency rates in MS SQL Server 2005:

        ID | CURR | RATE | DATE
        1  | USD  | 30   | 01.10.2010
        3  | GBP  | 45   | 07.10.2010
        5  | USD  | 31   | 08.10.2010
        7  | GBP  | 46   | 09.10.2010
        9  | USD  | 32   | 12.10.2010
        11 | GBP  | 48   | 03.10.2010

    Rates are updated in real time and there are more than 1 billion rows in the table. I need to write an SQL query which will provide the latest rate for each currency. My solution is:

        SELECT c.[id], c.[curr], c.[rate], c.[date]
        FROM [curr_rate] c,
             (SELECT curr, MAX(date) AS rate_date
              FROM [curr_rate]
              GROUP BY curr) t
        WHERE c.date = t.rate_date AND c.curr = t.curr
        ORDER BY c.[curr] ASC

    Is it possible to write the query without subqueries and joins with derived tables?

    Read the article

  • Cannot sort a row of size 8130, which is greater than the allowable maximum of 8094

    - by Sri Kumar
    Hello All,

        SELECT DISTINCT tblJobReq.JobReqId, tblJobReq.JobStatusId,
               tblJobClass.JobClassId, tblJobClass.Title,
               tblJobReq.JobClassSubTitle, tblJobAnnouncement.JobClassDesc,
               tblJobAnnouncement.EndDate, tblJobAnnouncement.AgencyMktgVerbage,
               tblJobAnnouncement.SpecInfo, tblJobAnnouncement.Benefits,
               tblSalary.MinRateSal, tblSalary.MaxRateSal,
               tblSalary.MinRateHour, tblSalary.MaxRateHour,
               tblJobClass.StatementEval,
               tblJobReq.ApprovalDate, tblJobReq.RecruiterId, tblJobReq.AgencyId
        FROM ((tblJobReq
               LEFT JOIN tblJobAnnouncement ON tblJobReq.JobReqId = tblJobAnnouncement.JobReqId)
              INNER JOIN tblJobClass ON tblJobReq.JobClassId = tblJobClass.JobClassId)
             LEFT JOIN tblSalary ON tblJobClass.SalaryCode = tblSalary.SalaryCode
        WHERE (tblJobReq.JobClassId IN (SELECT JobClassId FROM tblJobClass
                                        WHERE tblJobClass.Title LIKE '%Family Therapist%'))

    When I try to execute the query, it results in the following error:

        Cannot sort a row of size 8130, which is greater than the allowable maximum of 8094

    I checked and didn't find any solution. The only way I found is to truncate (substring()) the tblJobAnnouncement.JobClassDesc column in the query, which has a column size of around 8000. Is there any workaround so that I need not truncate the values? Or can this query be optimized? Any setting in SQL Server 2000?

    Read the article

  • Representing game states in Tic Tac Toe

    - by dacman
    The goal of the assignment I'm currently working on for my Data Structures class is to create a version of Quantum Tic Tac Toe with an AI that plays to win. Currently, I'm having a bit of trouble finding the most efficient way to represent states.

    Overview of the current structure:

    AbstractGame
    - Has and manages AbstractPlayers (game.nextPlayer() returns the next player by int ID)
    - Has and initializes AbstractBoard at the beginning of the game
    - Has a GameTree (complete if called in initialization, incomplete otherwise)

    AbstractBoard
    - Has a State, a Dimension, and a parent Game
    - Is a mediator between Player and State (translates States from collections of rows to a Point representation)
    - Is a StateConsumer

    AbstractPlayer
    - Is a StateProducer
    - Has a ConcreteEvaluationStrategy to evaluate the current board

    StateTransversalPool
    - Precomputes possible transversals of "3-states"
    - Stores them in a HashMap, where the Set contains nextStates for a given "3-state"

    State
    - Contains 3 Sets: a Set of X-moves, a Set of O-moves, and the Board
    - Each Integer in a set is a row. These Integer values can be used to get the next row-state from the StateTransversalPool

    So, the principle is: each row can be represented by the binary numbers 000-111, where 0 implies an open space and 1 implies a closed space. So, for an incomplete TTT board:

    From the Set<Integer> board perspective:

        X_X    R1 might be: 101
        OO_    R2 might be: 110
        X_X    R3 might be: 101    (1 is a closed space, 0 is an open space)

    From the Set<Integer> xMoves perspective:

        X_X    R1 might be: 101
        OO_    R2 might be: 000
        X_X    R3 might be: 101    (1 is an X, 0 is not)

    From the Set<Integer> oMoves perspective:

        X_X    R1 might be: 000
        OO_    R2 might be: 110
        X_X    R3 might be: 000    (1 is an O, 0 is not)

    Then we see that x{R1,R2,R3} & o{R1,R2,R3} = board{R1,R2,R3}.

    The problem is quickly generating next states for the GameTree. If I have player Max (X) with board{R1,R2,R3}, then getting the next row-states for R1, R2, and R3 is simple:

        Set<Integer> R1nextStates = StateTransversalPool.get(R1);

    The problem is that I have to combine each one of those states with R1 and R2. Is there a better data structure besides Set that I could use? Is there a more efficient approach in general? I've also found Point <-> State mediation cumbersome. Is there another approach that I could try there? Thanks!

    Here is the code for my ConcretePlayer class. It might help explain how players produce new states via moves, using the StateProducer (which might need to become StateFactory or StateBuilder).

        public class ConcretePlayerGeneric extends AbstractPlayer {
            @Override
            public BinaryState makeMove() {
                // Given a move and the current state, produce a new state
                Point playerMove = super.strategy.evaluate(this);
                BinaryState currentState = super.getInGame().getBoard().getState();
                return StateProducer.getState(this, playerMove, currentState);
            }
        }

    EDIT: I'm starting with normal TTT and moving to Quantum TTT. Given the framework, it should be as simple as creating several new concrete classes and tweaking some things.
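
    A common alternative representation (a sketch, not the assignment's required design): pack each side's marks into a single 9-bit integer, so legal-move generation and win detection become a few bitwise operations, and each state is just two small ints that are cheap to copy into a GameTree:

        WIN_MASKS = [
            0b111000000, 0b000111000, 0b000000111,  # rows
            0b100100100, 0b010010010, 0b001001001,  # columns
            0b100010001, 0b001010100,               # diagonals
        ]

        def moves(x_bits, o_bits):
            """Yield every legal move as a one-bit mask over the 3x3 board."""
            occupied = x_bits | o_bits
            for i in range(9):
                if not occupied & (1 << i):
                    yield 1 << i

        def wins(bits):
            """True if `bits` covers any winning line."""
            return any(bits & mask == mask for mask in WIN_MASKS)

        x, o = 0b000010000, 0   # X has taken the centre square
        x |= next(moves(x, o))  # X takes the first free square as well
        assert not wins(x)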

    Read the article

  • Mysql - help me optimize this query

    - by sandeepan-nath
    About the system:

    - The system has a total of 8 tables: Users; Tutor_Details (tutors are a type of user; Tutor_Details is linked to Users); learning_packs (stores packs created by tutors); learning_packs_tag_relations (holds tag relations meant for search); tutors_tag_relations and tags; and orders (containing purchase details of tutors' packs) with order_details, linked to orders and tutor_details. For a clearer idea about the tables involved, please check "The tables" section at the end.
    - A tag-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details, please check the section "How do tags work in this system?" below.

    Following is a simpler representation (not the actual) of the more complex query which I am trying to optimize. I have used placeholders like `explanation of parts` in the query:

        select SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) as key_1_total_matches,
               SUM(DISTINCT( t.tag LIKE "%democracy%" )) as key_2_total_matches,
               td.*, u.*, count(distinct(od.id_od)),
               if (lp.id_lp > 0) then `some conditional logic on lp fields` else 0 as tutor_popularity
        from Tutor_Details AS td
        JOIN Users as u on u.id_user = td.id_user
        LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN `some other tables on lp.id_lp - let's call them the learning pack tables set (including Learning_Packs)`
        LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
        LEFT JOIN Orders as o on od.id_order = o.id_order
        LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
        JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag)
        where `some condition on Users table's fields`
        AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
                 THEN `some conditions on the learning pack tables set` ELSE 1 END
        AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
                 THEN `some conditions on the webclasses tables set` ELSE 1 END
        AND CASE WHEN (od.id_od > 0)
                 THEN od.id_author = td.id_tutor and `some conditions on Orders table's fields` ELSE 1 END
        AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%")
        group by td.id_tutor
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        order by tutor_popularity desc, u.surname asc, u.name asc
        limit 0,20

    What does the above query do?

    It does an AND-logic search on the search keywords (2 in this example: "Democracy" and "Dictatorship"). It returns only those tutors for whom both keywords are present in the union of two sets: the tutor's details and the details of all the packs created by that tutor. To make things clear, suppose a tutor named "Sandeepan Nath" has created a pack "My first pack". Then:

    - Searching "Sandeepan Nath" returns Sandeepan Nath.
    - Searching "Sandeepan first" returns Sandeepan Nath.
    - Searching "Sandeepan second" does not return Sandeepan Nath.

    The problem:

    The results returned by the above query are correct (AND logic working as expected), but the time taken by the query on heavily loaded databases is about 25 seconds, as against normal query timings of the order of 0.0002-0.005 seconds, which makes it totally unusable. It is possible that some of the delay is caused because all the possible fields have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible and displaying the same results.

    How do tags work in this system?

    When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details, like name, surname, etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details, like pack name, description, etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table.

    The tables:

    Most of the following tables contain many other fields which I have omitted here.

        CREATE TABLE IF NOT EXISTS users (
          id_user int(10) unsigned NOT NULL AUTO_INCREMENT,
          name varchar(100) NOT NULL DEFAULT '',
          surname varchar(155) NOT NULL DEFAULT '',
          PRIMARY KEY (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636;

        CREATE TABLE IF NOT EXISTS tutor_details (
          id_tutor int(10) NOT NULL AUTO_INCREMENT,
          id_user int(10) NOT NULL DEFAULT '0',
          PRIMARY KEY (id_tutor),
          KEY Users_FKIndex1 (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51;

        CREATE TABLE IF NOT EXISTS orders (
          id_order int(10) unsigned NOT NULL AUTO_INCREMENT,
          PRIMARY KEY (id_order),
          KEY Orders_FKIndex1 (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275;

        ALTER TABLE orders
          ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user)
          ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS order_details (
          id_od int(10) unsigned NOT NULL AUTO_INCREMENT,
          id_order int(10) unsigned NOT NULL DEFAULT '0',
          id_author int(10) NOT NULL DEFAULT '0',
          PRIMARY KEY (id_od),
          KEY Order_Details_FKIndex1 (id_order)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284;

        ALTER TABLE order_details
          ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order)
          ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS learning_packs (
          id_lp int(10) unsigned NOT NULL AUTO_INCREMENT,
          id_author int(10) unsigned NOT NULL DEFAULT '0',
          PRIMARY KEY (id_lp),
          KEY Learning_Packs_FKIndex2 (id_author),
          KEY id_lp (id_lp)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23;

        CREATE TABLE IF NOT EXISTS tags (
          id_tag int(10) unsigned NOT NULL AUTO_INCREMENT,
          tag varchar(255) DEFAULT NULL,
          PRIMARY KEY (id_tag),
          UNIQUE KEY tag (tag),
          KEY id_tag (id_tag),
          KEY tag_2 (tag),
          KEY tag_3 (tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419;

        CREATE TABLE IF NOT EXISTS tutors_tag_relations (
          id_tag int(10) unsigned NOT NULL DEFAULT '0',
          id_tutor int(10) DEFAULT NULL,
          KEY Tutors_Tag_Relations (id_tag),
          KEY id_tutor (id_tutor),
          KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        ALTER TABLE tutors_tag_relations
          ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag)
          ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS learning_packs_tag_relations (
          id_tag int(10) unsigned NOT NULL DEFAULT '0',
          id_tutor int(10) DEFAULT NULL,
          id_lp int(10) unsigned DEFAULT NULL,
          KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag),
          KEY id_lp (id_lp),
          KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        ALTER TABLE learning_packs_tag_relations
          ADD CONSTRAINT Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag)
          ON DELETE NO ACTION ON UPDATE NO ACTION;

    Following is the exact query (this also includes classes; tutors can create classes, and search terms are matched against classes created by tutors). Note: the comparison operators (">") below were eaten by HTML in the original post and have been restored:

        select count(distinct(od.id_od)) as tutor_popularity,
               CASE WHEN (IF((wc.id_wc > 0),
                             (wc.wc_api_status = 1 AND wc.wc_type = 0
                              AND wc.class_date > '2010-06-01 22:00:56'
                              AND wccp.status = 1
                              AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))),
                             0))
                    THEN 1 ELSE 0 END as 'classes_published',
               CASE WHEN (IF((lp.id_lp > 0),
                             (lp.id_status = 1 AND lp.published = 1
                              AND lpcp.status = 1
                              AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))),
                             0))
                    THEN 1 ELSE 0 END as 'packs_published',
               td.*, u.*
        from Tutor_Details AS td
        JOIN Users as u on u.id_user = td.id_user
        LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat
        LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent
        LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp)
        LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
        LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc
        LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat
        LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent
        LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
        LEFT JOIN Orders as o on od.id_order = o.id_order
        LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
        JOIN Tags as t on (t.id_tag = ttagrels.id_tag)
                       OR (t.id_tag = lptagrels.id_tag)
                       OR (t.id_tag = wtagrels.id_tag)
        where (u.country='IE' or u.country IN ('INT'))
        AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
                 THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1
                      AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))
                 ELSE 1 END
        AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
                 THEN wc.wc_api_status = 1 AND wc.wc_type = 0
                      AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1
                      AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))
                 ELSE 1 END
        AND CASE WHEN (od.id_od > 0)
                 THEN od.id_author = td.id_tutor and o.order_status = 'paid'
                      and CASE WHEN (od.id_wc > 0) THEN od.can_attend_class = 1 ELSE 1 END
                 ELSE 1 END
        AND 1
        group by td.id_tutor
        order by tutor_popularity desc, u.surname asc, u.name asc
        limit 0,20

    Please note: the provided database structure does not show all the fields and tables used in this query.

    Read the article

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issues with my DataTables. In this particular case I am using a DataTable purely as a holder of data; it is never used in a GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable.

        protected myclass(DataTable dataTable, DataColumn idColumn)
        {
            this.dataTable = dataTable;
            IdColumn = idColumn ?? this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            JobIdColumn = this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            IsNewColumn = this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));

            int id = 1;
            foreach (DataRow r in this.dataTable.Rows)
            {
                r[JobIdColumn] = id++;
                r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            }
        }

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again:

        _dt.BeginLoadData();
        foreach (DataRow row in _dt.Rows)
            row.BeginEdit();

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I not use a DataTable at all, but I have already considered and rejected that due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that the code is ancient (.NET 1.1 era) and I don't want to spend that much time changing it; it is also because the original table comes out of a third-party component.

    Read the article
