Search Results

Search found 20224 results on 809 pages for 'query optimization'.

Page 186 of 809

  • MySQL query with 2 COUNT() of other tables with where conditions

    - by Isern Palaus
    Hello, I have a table called sports that contains a list of sports, another called seasons that contains the seasons for a specific sport, and a third called competitions that holds the competitions of a specific sport and season. I need one MySQL query to print the list of sports along with how many seasons and competitions each one has. My table structures:

        sports
        +-------------+------------------+------+-----+---------+----------------+
        | Field       | Type             | Null | Key | Default | Extra          |
        +-------------+------------------+------+-----+---------+----------------+
        | id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | name        | varchar(32)      | NO   |     | NULL    |                |
        | slug        | varchar(45)      | NO   |     | NULL    |                |
        | description | varchar(128)     | NO   |     | NULL    |                |
        +-------------+------------------+------+-----+---------+----------------+

        seasons
        +-------------+------------------+------+-----+---------+----------------+
        | Field       | Type             | Null | Key | Default | Extra          |
        +-------------+------------------+------+-----+---------+----------------+
        | id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | id_sport    | int(10) unsigned | NO   | MUL | NULL    |                |
        | name        | varchar(32)      | NO   |     | NULL    |                |
        | slug        | varchar(32)      | NO   |     | NULL    |                |
        +-------------+------------------+------+-----+---------+----------------+

        competitions
        +-------------+------------------+------+-----+---------+----------------+
        | Field       | Type             | Null | Key | Default | Extra          |
        +-------------+------------------+------+-----+---------+----------------+
        | id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | id_season   | int(10) unsigned | NO   | MUL | NULL    |                |
        | name        | varchar(32)      | NO   |     | NULL    |                |
        | slug        | varchar(64)      | NO   |     | NULL    |                |
        | description | varchar(128)     | YES  |     | NULL    |                |
        +-------------+------------------+------+-----+---------+----------------+

    The result of my query needs to contain: sports.*, total_seasons (the count of seasons where seasons.id_sport = sports.id) and total_competitions (the count of competitions where competitions.id_season = seasons.id AND seasons.id_sport = sports.id). Thank you in advance!

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it's in my panel_Paint event, and g is the graphics context.

        for (int y = 0; y < numOfCells * cellSize; y += cellSize)
        {
            for (int x = 0; x < numOfCells * cellSize; x += cellSize)
            {
                g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
                g.DrawLine(p, 0, x, y + size * drawnGrid, x);
            }
        }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 & cellSize = 10. Removing all the multiplication makes it faster, but not by very much. Is there a better/more efficient way to draw my grid? Thanks
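
    A common observation for this kind of grid (not part of the original question): only (numOfCells + 1) vertical and (numOfCells + 1) horizontal lines are needed, rather than a pair of DrawLine calls per cell. A language-neutral sketch of that endpoint computation, written here in Python with illustrative names:

        def grid_lines(num_of_cells, cell_size):
            # Endpoints for the full grid: one vertical and one horizontal
            # line per grid index, (num_of_cells + 1) of each.
            extent = num_of_cells * cell_size
            vertical = [((i * cell_size, 0), (i * cell_size, extent))
                        for i in range(num_of_cells + 1)]
            horizontal = [((0, i * cell_size), (extent, i * cell_size))
                          for i in range(num_of_cells + 1)]
            return vertical + horizontal

        # e.g. len(grid_lines(100, 10)) == 202 line segments, versus the
        # 100 * 100 iterations of the nested loop above.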

    Read the article

  • Optimizing GDI+ drawing?

    - by user146780
    I'm using C++ and GDI+. I'm going to be making a vector drawing application and want to use GDI+ for the drawing. I've created a simple test to get familiar with it:

        case WM_PAINT:
            GetCursorPos(&mouse);
            GetClientRect(hWnd, &rct);
            hdc = BeginPaint(hWnd, &ps);
            MemDC = CreateCompatibleDC(hdc);
            bmp = CreateCompatibleBitmap(hdc, 600, 600);
            SelectObject(MemDC, bmp);
            g = new Graphics(MemDC);
            for(int i = 0; i < 1; ++i)
            {
                SolidBrush sb(Color(255,255,255));
                g->FillRectangle(&sb, rct.top, rct.left, rct.right, rct.bottom);
            }
            for(int i = 0; i < 250; ++i)
            {
                pts[0].X = 0;
                pts[0].Y = 0;
                pts[1].X = 10 + mouse.x * i;
                pts[1].Y = 0 + mouse.y * i;
                pts[2].X = 10 * i + mouse.x;
                pts[2].Y = 10 + mouse.y * i;
                pts[3].X = 0 + mouse.x;
                pts[3].Y = (rand() % 600) + mouse.y;
                Point p1, p2;
                p1.X = 0;
                p1.Y = 0;
                p2.X = 300;
                p2.Y = 300;
                g->FillPolygon(&b, pts, 4);
            }
            BitBlt(hdc, 0, 0, 900, 900, MemDC, 0, 0, SRCCOPY);
            EndPaint(hWnd, &ps);
            DeleteObject(bmp);
            g->ReleaseHDC(MemDC);
            DeleteDC(MemDC);
            delete g;
            break;

    I'm wondering if I'm doing it right, or if I have areas that are killing the CPU, because right now it takes ~1 sec to render this and I want it to redraw itself very quickly. Thanks. In a real situation, would it be better to just figure out the portion of the screen to redraw and only redraw the elements within the bounds of it?

    Read the article

  • Does the compiler optimize the function parameters passed by value?

    - by Naveen
    Let's say I have a function where the parameter is passed by value instead of const-reference. Further, let's assume that the value is only used inside the function, i.e. the function doesn't try to modify it. In that case, will the compiler be able to figure out that it can pass the value by const-reference (for performance reasons) and generate the code accordingly? Is there any compiler which does that?

    Read the article

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:
    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations

    Read the article

  • MySQL Limiting a query to one consistent value

    - by Lucas Matos
    My current query returns a table like:

        +------------+
        value1 | ....
        value1 | ....
        value2 | ....
        value3 | ....
        +------------+

    I want:

        +------------+
        value1 | ....
        value1 | ....
        +------------+

    That is, I want to receive only the rows with the first value. Normally I would use a WHERE clause if I knew that value, and I cannot use a LIMIT because each value has a different number of rows. Right now my query looks like:

        SELECT u.*, n.something, w.*
        FROM ... AS u, ... AS n, ... AS w
        WHERE u.id = n.id
          AND w.val = n.val
          AND u.desc LIKE '%GET REQUEST VARIABLE%';

    This works great, except that I get way too many rows, and using PHP to do the filtering ruins code portability and is superfluous. Thanks for reading.

    Read the article

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a SPOJ problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program will accept each number on a new line of stdin and, after receiving the nth number, will tell you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas?

        import sys

        first_in = raw_input()
        thing = first_in.split()
        n = int(thing[0])
        k = int(thing[1])
        total = 0
        i = 0
        for line in sys.stdin:
            t = int(line)
            if t % k == 0:
                total += 1
        print total
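
    A direction often suggested for I/O-heavy SPOJ problems like this one is to read all of stdin in a single call and count with a generator expression, avoiding per-line loop overhead. A minimal sketch of that idea (not benchmarked against the judge here):

        import sys

        def count_divisible():
            data = sys.stdin.read().split()        # slurp all tokens at once
            n, k = int(data[0]), int(data[1])
            # Count the n following values that are divisible by k.
            return sum(1 for token in data[2:2 + n] if int(token) % k == 0)

        print(count_divisible())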

    Read the article

  • Get JS file query param from inside it

    - by vsync
    I load this file with some query param like this: src='somefile.js?userId=123'. I wrote the below function in the 'somefile.js' file that reads the 'userId' query param, but I feel this is not the best approach. Frankly, it's quite ugly. Is there a better way?

        function getId(){
            var scripts = document.getElementsByTagName('script'), script;
            for(var i in scripts){
                if( scripts.hasOwnProperty(i) && scripts[i].src.indexOf('somefile.js') != -1 )
                    var script = scripts[i];
            }
            var s = (script.getAttribute.length !== undefined) ? script.getAttribute('src') : script.getAttribute('src', 2);
            return getQueryParams('userId', s);
        };

    Read the article

  • MySQL query to concat information from 3 tables - getting incorrect result count

    - by iPfaffy
    I have 3 tables in my database:

        ab_contacts:     id, first_name, last_name, addressbook_id
        ab_addressbooks: name, id
        co_comments:     id, link_id, comment

    I'd like to create a query that will let me select all the contacts and the comments related to them in a given addressbook. To select all the people in a given addressbook, I can use:

        select count(*) from ab_contacts where addressbook_id = '50';

    This returns 8152 people. However, when I run my query:

        select ab_contacts.first_name, ab_contacts.last_name, ab_contacts.email,
               ab_addressbooks.name, co_comments.comments
        from ab_contacts
        JOIN ab_addressbooks ON (ab_contacts.addressbook_id = ab_addressbooks.id)
        JOIN co_comments ON (ab_contacts.id = co_comments.link_id)
        WHERE ab_contacts.addressbook_id = '50';

    the format works, but I only get 1045 results. I'm sure there is something I am missing, but I cannot figure it out. Any help would be greatly appreciated.

    Read the article

  • Optimizing JS Array Search

    - by The.Anti.9
    I am working on a browser-based media player which is written almost entirely in HTML5 and JavaScript. The backend is written in PHP, but it has only one function, which is to fill the playlist on the initial load; the rest is all JS. There is a search bar that refines the playlist. I want it to refine as the person is typing, like most media players do. The only problem with this is that it is very slow and laggy, as there are about 1000 songs in the whole program and there are likely to be more as time goes on. The original playlist load is an AJAX call to a PHP page that returns the results as JSON. Each item has 4 attributes: artist, album, file, and url. I then loop through each object and add it to an array called playlist. At the end of the looping, a copy of playlist is created, called backup. This is so that I can refine the playlist variable when people refine their search, but still repopulate it from backup without making another server request. The method refine() is called when the user types a key into the search box. It flushes playlist and searches through each property (not including url) of each object in the backup array for a match in the string. If there is a match in any of the properties, it appends the information to a table that displays the playlist, and adds the object to playlist for access by the actual player. Code for the refine() method:

        function refine() {
            $('#loadinggif').show();
            $('#library').html("<table id='libtable'><tr><th>Artist</th><th>Album</th><th>File</th><th>&nbsp;</th></tr></table>");
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                var sfile = new String(backup[j].file);
                var salbum = new String(backup[j].album);
                var sartist = new String(backup[j].artist);
                if (sfile.toLowerCase().search($('#search').val().toLowerCase()) !== -1
                    || salbum.toLowerCase().search($('#search').val().toLowerCase()) !== -1
                    || sartist.toLowerCase().search($('#search').val().toLowerCase()) !== -1) {
                    playlist.push(backup[j]);
                    num = playlist.length - 1;
                    $("<tr></tr>").html("<td>" + num + "</td><td>" + sartist + "</td><td>" + salbum
                        + "</td><td>" + sfile + "</td><td><a href='#' onclick='setplay(" + num + ");'>Play</a></td>").appendTo('#libtable');
                }
            }
            $('#loadinggif').hide();
        }

    As I said before, for the first couple of letters typed this is very slow and laggy. I am looking for ways to refine this to make it much faster and smoother.

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file:

        Data[] data = { /* my data set */ };

    The data set is about 90kb, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.

    Read the article

  • How to speed up code?

    - by kaushik
    I want to speed up my code's execution. I have searched the internet and heard that Psyco is a very good tool to improve speed, but I could not find a site to download it from. I have not installed any additional libraries or modules in my Python to date. Can a Psyco user tell me where to download Psyco and describe its installation and usage procedure? I use Windows Vista and Python 2.6. Does it work on this setup?
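
    For reference, Psyco's releases were distributed through psyco.sourceforge.net; it only works on 32-bit x86 builds of CPython, and its later releases added Python 2.6 support. Assuming a compatible interpreter is installed, enabling it is a two-line change; a minimal sketch (the hot_loop function is just an illustrative workload):

        # Assumes the Psyco package from psyco.sourceforge.net has been
        # installed against a 32-bit Python 2.6.
        import psyco
        psyco.full()   # JIT-compile every function from this point on

        def hot_loop(n, k):
            total = 0
            for value in xrange(n):
                if value % k == 0:
                    total += 1
            return total

        print(hot_loop(10 ** 6, 7))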

    Read the article

  • Preventing objects from being linked if they are not needed?

    - by Massif
    I have an ARM project that I'm building with make. I'm creating the list of object files to link based on the names of all of the .c and .cpp files in my source directory. However, I would like to exclude objects from being linked if they are never used. Will the linker exclude these objects from the .elf file automatically even if I include them in the list of objects to link? If not, is there a way to generate a list of only the objects that need to be linked?

    Read the article

  • Speed up this query joining to a table multiple times

    - by Mongus Pong
    Hi, I have this query that (stripped right down) goes something like this:

        SELECT [Person_PrimaryContact].[LegalName],
               [Person_Manager].[LegalName],
               [Person_Owner].[LegalName],
               [Person_ProspectOwner].[LegalName],
               [Person_ProspectBDM].[LegalName],
               [Person_ProspectFE].[LegalName],
               [Person_Signatory].[LegalName]
        FROM [Cache]
        LEFT JOIN [dbo].[Person] AS [Person_Owner] WITH (NOLOCK)
               ON [Person_Owner].[PersonID] = [Cache].[ClientOwnerID]
        LEFT JOIN [dbo].[Person] AS [Person_Manager] WITH (NOLOCK)
               ON [Person_Manager].[PersonID] = [Cache].[ClientManagerID]
        LEFT JOIN [dbo].[Person] AS [Person_Signatory] WITH (NOLOCK)
               ON [Person_Signatory].[PersonID] = [Cache].[ClientSignatoryID]
        LEFT JOIN [dbo].[Person] AS [Person_PrimaryContact] WITH (NOLOCK)
               ON [Person_PrimaryContact].[PersonID] = [Cache].[PrimaryContactID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectOwner] WITH (NOLOCK)
               ON [Person_ProspectOwner].[PersonID] = [Cache].[ProspectOwnerID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectBDM] WITH (NOLOCK)
               ON [Person_ProspectBDM].[PersonID] = [Cache].[ProspectBDMID]
        LEFT JOIN [dbo].[Person] AS [Person_ProspectFE] WITH (NOLOCK)
               ON [Person_ProspectFE].[PersonID] = [Cache].[ProspectFEID]

    Person is a huge table and each join to it takes a pretty significant hit in the execution plan. Is there any way I can adjust this query so that I am only joining to it once, or at least get SQL Server to scan through it only once?

    Read the article

  • Java - Optimize finding a string in a list

    - by Mark
    I have an ArrayList of objects where each object contains a string 'word' and a date. I need to check to see if the date has passed for a list of 500 words. The ArrayList could contain up to a million words and dates. The dates I store as integers, so the problem I have is finding the word I am looking for in the ArrayList. Is there a way to make this faster? In Python I have a dict, and mWords['foo'] is a simple lookup without looping through the whole 1 million items in the mWords array. Is there something like this in Java?

        for (int i = 0; i < mWords.size(); i++) {
            if ( word == mWords.get(i).word ) {
                mLastFindIndex = i;
                return mWords.get(i);
            }
        }
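
    The constant-time lookup described for Python maps directly onto java.util.HashMap in Java: build the map once, then each of the 500 word lookups avoids scanning the million-entry list. The shape of that index, sketched here in Python terms to mirror the question's description (the field and helper names are illustrative, not from the original code):

        def build_index(entries):
            # entries: objects with .word (string) and .date (integer) fields
            return {entry.word: entry for entry in entries}   # built once, one pass

        def words_with_passed_date(index, words, today):
            # 500 O(1) lookups instead of 500 linear scans
            return [w for w in words if w in index and index[w].date < today]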

    Read the article

  • Access SQL query to SELECT from one table and INSERT into another

    - by typoknig
    Below is my query. Access does not like it, giving me the error "Syntax error (missing operator) in query expression 'answer WHERE question = 1'". Hopefully you can see what I am trying to do. Please pay particular attention to the 3rd, 4th, and 5th lines under the SELECT statement.

        INSERT INTO Table2 (respondent,1,2,3-1,3-2,3-3,4,5)
        SELECT respondent,
               answer WHERE question = 1,
               answer WHERE question = 2,
               answer WHERE answer = 'text 1' AND question = 3,
               answer WHERE answer = 'text 2' AND question = 3,
               answer WHERE answer = 'text 3' AND question = 3,
               answer WHERE question = 4,
               longanswer WHERE question 5
        FROM Table1
        GROUP BY respondent;

    Read the article

  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as pages rendered by PHP, Java, or .NET, versus coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most efficient optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited; please excuse the awkwardly posed question, hopefully you understand what I am trying to get at.

    Read the article

  • Fastest way to compare Objects of type DateTime

    - by radbyx
    I made this. Is this the fastest way to find the latest DateTime in my collection of DateTimes? I'm wondering if there is a method for what I'm doing inside the foreach, but even if there is, I can't see how it could be faster than what I already have.

        List<StateLog> stateLogs = db.StateLog.Where(p => p.ProductID == product.ProductID).ToList();
        DateTime lastTimeStamp = DateTime.MinValue;
        foreach (var stateLog in stateLogs)
        {
            int result = DateTime.Compare(lastTimeStamp, stateLog.TimeStamp);
            if (result < 0)
                lastTimeStamp = stateLog.TimeStamp; // set because this timestamp is later
        }

    Read the article

  • Help me with the following SQL query

    - by rupeshmalviya
    Could somebody correct my following query? I am a novice in the software development realm. I am appending a StringBuilder object, in comma-separated form, to my query, but it's not producing the desired result. The query is as follows:

        string cmd = "SELECT * FROM [placed_student] WHERE passout_year=@passout AND company_id=@companyId AND course_id=@courseId AND branch_id IN('" + sb + "')";

        StringBuilder sb = new StringBuilder();
        foreach (ListItem li in branch.Items)
        {
            if (li.Selected == true)
            {
                sb.Append(Convert.ToInt32(li.Value) + ", ");
            }
        }

    li is the integer value of my checkbox list items, which may be generated differently at different times. Please also suggest some good sources to learn SQL.

    Read the article

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on:
    - One string is wholly contained within the other, and is at least 1/4 of its length
    - The strings have an edit distance less than 5% of the total length of the two strings

    The pseudo-PHP code:

        foreach($values as $value){
            foreach($values as $match){
                if( ( $value['length'] < $match['length']
                      && $value['length'] * 4 > $match['length']
                      && stripos($match['value'], $value['value']) !== false )
                 || ( $match['length'] < $value['length']
                      && $match['length'] * 4 > $value['length']
                      && stripos($value['value'], $match['value']) !== false )
                 || ( abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length'])
                      && 0 < ($match['changes'] = levenshtein($value['value'], $match['value']))
                      && $match['changes'] * 20 <= ($value['length'] + $match['length']) )
                ){
                    $matches[] = &$match;
                }
            }
        }

    I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few sets of values being operated on:

        Total   | Strings      | # of matches per string  |          |
        Strings | With Matches | Average | Median | Max  | Time (s) |
        --------+--------------+---------+--------+------+----------+
        844     | 413          | 1.8     | 1      | 58   | 140      |
        593     | 156          | 1.2     | 1      | 5    | 62       |
        272     | 168          | 3.2     | 2      | 26   | 10       |
        157     | 47           | 1.5     | 1      | 4    | 3.2      |
        106     | 48           | 1.8     | 1      | 8    | 1.3      |
        62      | 47           | 2.9     | 2      | 16   | 0.4      |

    Are there any other things I can do to reduce the time to check the criteria, and more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity?
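
    One pre-processing idea that follows from the stated criteria (both tests require the longer string to be strictly less than four times the length of the shorter): sort by length first, then compare each string only against its bounded window of longer neighbours instead of every other string. A sketch of that candidate-generation step in Python; is_near_dup and record_match are placeholders standing in for the question's own checks:

        def candidate_pairs(values):
            """Yield only pairs whose lengths could satisfy either criterion."""
            ordered = sorted(values, key=len)
            for i, shorter in enumerate(ordered):
                for longer in ordered[i + 1:]:
                    if len(longer) >= 4 * len(shorter):
                        break          # everything after this is longer still
                    yield shorter, longer

        # Usage sketch: run the expensive checks only on surviving pairs.
        # for a, b in candidate_pairs(strings):
        #     if is_near_dup(a, b):      # placeholder for the two criteria
        #         record_match(a, b)     # placeholder for collecting results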

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code will run without error. I need to optimize the code, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Question: Any suggestions to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what would happen to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out? Thanks for any advice.

    Read the article

  • [PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query]

    - by doniyor
    I need help. I am trying to select from the database through an SQL statement in the Play framework, but it gives me an error and I cannot figure out what the problem is. Here is the code:

        @Transactional
        public static Users findByUsernameAndPassword(String username, String password){
            String hash = DigestUtils.md5Hex(password);
            Query q = JPA.em().createNativeQuery("select * from USERS where" +
                    "USERNAME=? and PASSWORD=?").setParameter(1, username).setParameter(2, password);
            List<Users> users = q.getResultList();
            if(users.isEmpty()){
                return null;
            } else{
                return users.get(0);

    Here is the error message:

        [PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query]

    Can someone help me please? I would appreciate any help. Thanks.

    Read the article

  • Does a c/c++ compiler optimize constant divisions by power-of-two value into shifts?

    - by porgarmingduod
    Question says it all. Does anyone know if the following...

        size_t div(size_t value) {
            const size_t x = 64;
            return value / x;
        }

    ...is optimized into this?

        size_t div(size_t value) {
            return value >> 6;
        }

    Do compilers do this? (My interest lies in GCC.) Are there situations where it does and others where it doesn't? I would really like to know, because every time I write a division that could be optimized like this, I spend some mental energy wondering whether precious nothings of a second are wasted doing a division where a shift would suffice.
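
    The identity the question leans on, that dividing a non-negative integer by 64 = 2^6 equals shifting it right by 6 bits, is easy to sanity-check; a throwaway check in Python:

        # value // 64 and value >> 6 agree for every non-negative integer,
        # which is what makes the unsigned division above a candidate for
        # strength reduction into a shift.
        assert all((v >> 6) == (v // 64) for v in range(1 << 16))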

    Read the article

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be 2^6, 3^4, and 4^3. My first attempt to solve this was to brute force all combinations of A and X that make "sense." However, this becomes too slow when used for very large numbers in a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered if using logarithms would be better, or if there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
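
    Since the question asks whether logarithms would help: they at least bound the search, because for a fixed power p the candidate bases lie near m**(1/p) and n**(1/p), and the largest power worth trying is about log2(n). A sketch of that approach (floating-point estimates are backed off by one and every candidate is re-verified with exact integer arithmetic):

        import math

        def mono_search(lo, hi):
            """Return all (base, power) with lo <= base**power <= hi,
            for integer base >= 2 and power >= 3 (verified exactly)."""
            results = []
            max_power = int(math.log(hi, 2)) + 1   # smallest base is 2
            for power in range(3, max_power + 1):
                # Float estimate of the first candidate base, backed off by
                # one in case of rounding; the exact check below drops any
                # value that really falls under lo.
                base = max(2, int(round(lo ** (1.0 / power))) - 1)
                while base ** power <= hi:
                    if base ** power >= lo:
                        results.append((base, power))
                    base += 1
            return results

        print(mono_search(50, 100))   # -> [(4, 3), (3, 4), (2, 6)]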

    Read the article
