Search Results

Search found 20224 results on 809 pages for 'query optimization'.

  • TSQL -- Make it better

    - by user319353
    Hi:

        -- Very Narrow (all IDs are passed in)
        IF(@EmpID IS NOT NULL AND @DeptID IS NOT NULL AND @CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName ,d.DeptName ,c.CityName
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE e.EmpID = @EmpID
        END
        -- Just 2 IDs passed in
        ELSE IF(@DeptID IS NOT NULL AND @CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName ,d.DeptName ,NULL AS [CityName]
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE d.deptID = @DeptID
        END
        -- Very Broad (just 1 ID passed in)
        ELSE IF(@CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName ,NULL AS [DeptName] ,NULL AS [CityName]
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE c.CityID = @CityID
        END
        -- None (Nothing passed in)
        ELSE
        BEGIN
            SELECT NULL AS [EmpName] ,NULL AS [DeptName] ,NULL AS [CityName]
        END

    Question: Is there a better way to write this? Specifically, can I do anything to avoid the IF...ELSE branching?
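
    One way to avoid the IF...ELSE branching entirely is to treat every ID as an optional filter in a single statement - in T-SQL this is commonly written as WHERE (@EmpID IS NULL OR e.EmpID = @EmpID) AND (@DeptID IS NULL OR d.DeptID = @DeptID) AND (@CityID IS NULL OR c.CityID = @CityID), often with OPTION (RECOMPILE) so the plan fits the parameters actually supplied. The same idea, building the statement on the client side, is sketched below in Python; only the table and column names come from the question, the function name and placeholder style are made up.

        # Hypothetical sketch: one SELECT whose WHERE clause contains only the
        # filters that were actually supplied, instead of separate IF branches.
        # The '?' placeholder style depends on your database driver.
        def build_employee_query(emp_id=None, dept_id=None, city_id=None):
            sql = (
                "SELECT e.EmpName, d.DeptName, c.CityName "
                "FROM Employee e "
                "JOIN Department d ON e.deptid = d.deptid "
                "JOIN City c ON e.CityID = c.CityID "
                "WHERE 1 = 1"
            )
            params = []
            for column, value in (("e.EmpID", emp_id),
                                  ("d.DeptID", dept_id),
                                  ("c.CityID", city_id)):
                if value is not None:    # append a predicate only when the ID was passed in
                    sql += f" AND {column} = ?"
                    params.append(value)
            return sql, params

        sql, params = build_employee_query(dept_id=42, city_id=7)
        print(sql)      # one statement, two bound parameters
        print(params)   # [42, 7]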

    Read the article

  • How to open DataSet in Visual Studio 2008 faster?

    - by Ekkapop
    When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue my work. While I'm waiting I can't do anything else in Visual Studio, and CPU and memory usage grow dramatically. Is there any way to reduce this waiting time? Hardware - Desktop: CPU: Intel Q6600, Memory: 4 GB, HDD: 320 GB 7200 rpm, OS: Windows XP 32-bit with Service Pack 3

    Read the article

  • excel import query error

    - by pmms
    mysql_connect("localhost","root",""); mysql_select_db("hitnrunf_db"); $result=mysql_query("select * from jos_users INTO OUTFILE 'users.csv' FIELDS ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' "); header("Content-type: text/plain"); header("Content-Disposition: attachment; filename=your_desired_name.xls"); header("Content-Transfer-Encoding: binary"); header("Pragma: no-cache"); header("Expires: 0"); print "$header\n$data"; in the above code in query string i.e string in side mysql_quey we are getting following error Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in C:\wamp\www\samples\mysql_excel\exel_outfile.php on line 8 in query string '\n' charter is not identifying as string thats why above error getting

    Read the article

  • image archive VS image strip

    - by DevA
    Hi, I've noticed that plenty of games and applications (very common on mobile builds) pack numerous images into an image strip. I figured the advantages are a tidier program (file-system-wise) and reduced (un)installation time. At runtime, the entire image strip is allocated and copied from the file system to RAM. Alternatively, images can be stored in an image archive and unpacked at runtime into a number of image structures in RAM. The way I see it, the image strip approach is less efficient because of worse caching performance, and because even if an optimal rectangle-packing algorithm is used there will be empty spaces between the stored images in the strip, wasting RAM. What are the advantages of using an image strip over an image archive file?

    Read the article

  • JavaScript replace with callback - performance question

    - by Tomalak
    In JavaScript, you can define a callback handler in regex string replace operations: str.replace(/str[123]|etc/, replaceCallback); Imagine you have a lookup object of strings and replacements. var lookup = {"str1": "repl1", "str2": "repl2", "str3": "repl3", "etc": "etc" }; and this callback function: var replaceCallback = function(match) { if (lookup[match]) return lookup[match]; else return match; } How would you assess the performance of the above callback? Are there solid ways to improve it? Would if (match in lookup) //.... or even return lookup[match] | match; lead to opportunities for the JS compiler to optimize, or is it all the same thing?
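
    The only reliable way to assess this is to measure it in the engines you care about, e.g. with console.time or a benchmarking library such as Benchmark.js. As an illustration of the shape such a micro-benchmark takes - written here in Python with timeit purely because it runs anywhere; the relative cost of the truthy check versus the membership check in a given JS engine has to be measured in that engine - something like this works:

        # Hypothetical micro-benchmark sketch comparing the two lookup styles
        # from the question; names and iteration counts are arbitrary.
        import timeit

        lookup = {"str1": "repl1", "str2": "repl2", "str3": "repl3", "etc": "etc"}
        keys = ["str1", "str2", "str3", "etc", "missing"] * 1000

        def truthy_check():        # mirrors: if (lookup[match]) return lookup[match]; else return match;
            return [lookup[k] if lookup.get(k) else k for k in keys]

        def membership_check():    # mirrors: if (match in lookup) ...
            return [lookup[k] if k in lookup else k for k in keys]

        for fn in (truthy_check, membership_check):
            print(fn.__name__, timeit.timeit(fn, number=200))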

    Read the article

  • How do you optimize stunicholls' "Professional dropdown #2" with jquery?

    - by geff_chang
    Link to menu: Professional dropdown #2 I was wondering if these posts Suckerfish meets jQuery or Son of Suckerfish dropdowns in jQuery could optimize the menu above. I need the menu to be optimized for IE6, because when I use the menu as it is, the menu hangs after I click on a menu item that loads a page with heavy processing. It takes too long for the menu to be enabled again. Any ideas?

    Read the article

  • Why is Magento so slow?

    - by mr-euro
    Is Magento usually so terribly slow? This is my first experience with it, and the admin panel simply takes ages to load and to save changes. It is a default installation with the test data. The server it is hosted on serves other, non-Magento sites very fast. What is it about Magento's PHP code that makes it so slow, and what can be done to fix it?

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this: <div class="first"> <div class="second"> <span class="third">Hello World</span> </div> </div> Which would be the fastest way to turn Hello World red? .third { color: red; } div.third { color: red; } div.second div.third { color: red; } div.first div.second div.third { color: red; } Also, what if there was a tag in the middle that had a unique id of "foo" - which of the CSS methods above would be the fastest? I know why these methods are used, etc.; I'm just trying to get a better grasp of how browsers render this, and I have no idea how to write a test that times it. UPDATE: Nice answer, Gumbo. From the looks of it, on a regular site it would be quicker to use the full selector chain for a tag, since the browser finds the parents and narrows the search with every parent found. The downside is that you could end up with a pretty large CSS file.

    Read the article

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.
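
    There is no standard configure macro that searches flags for you, but the brute-force idea - build a representative benchmark with several candidate flag sets, time each binary, and keep the winner - is easy to prototype before wiring it into configure.ac. A rough sketch follows; the benchmark file name, the flag sets and the timing method are all assumptions, and -ffast-math in particular changes floating-point semantics, so results need checking.

        # Hypothetical flag-search sketch: assumes g++ is on PATH and that
        # bench.cpp is a benchmark representative of the real workload.
        import subprocess, time

        FLAG_SETS = [
            ["-O2"],
            ["-O3"],
            ["-O3", "-march=native"],
            ["-O3", "-march=native", "-ffast-math"],   # verify numerical results!
        ]

        def build_and_time(flags):
            subprocess.run(["g++", *flags, "-o", "bench", "bench.cpp"], check=True)
            start = time.perf_counter()
            subprocess.run(["./bench"], check=True)
            return time.perf_counter() - start

        for seconds, flags in sorted((build_and_time(f), f) for f in FLAG_SETS):
            print(f"{seconds:8.3f}s  {' '.join(flags)}")

    Tools such as ACOVEA have taken this idea further, using a genetic algorithm to search the flag space.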

    Read the article

  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know the exact way all 26 letters in the alphabet will look like. I am using C# and am able to easily determine if a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i), might look like the following: 01110 00100 00100 00100 01110 Note: all points, which I use later in this post, assume that the top left pixel is (0, 0), bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this: CreateLetter("I", new List<List<bool>>() { new List<bool>() { false, true, true, true, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, true, true, true, false } }); I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now, this is for illustrative purposes. Every letter is exactly the same dimensions, 10px by 11px (10px by 11px is the actual dimensions of a character in my real program. I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to run on every single letter (26) on every single pixel (10 * 11 = 110) which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character. I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following: 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define it's unique characteristics as having a black pixel in the coordinate (3, 0) since no other characters have that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1 pixel test. This is what it might look like for all these characters: var LetterI = new OcrLetter() { Name = "I", BlackPixels = new List<Point>() { new Point (3, 0) } } var LetterA = new OcrLetter() { Name = "A", WhitePixels = new List<Point>() { new Point(2, 4) } } var LetterO = new OcrLetter() { Name = "O", BlackPixels = new List<Point>() { new Point(3, 2) }, WhitePixels = new List<Point>() { new Point(2, 2) } } var LetterB = new OcrLetter() { Name = "B", BlackPixels = new List<Point>() { new Point(3, 1) }, WhitePixels = new List<Point>() { new Point(3, 2) } } var LetterL = new OcrLetter() { Name = "L", BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) }, WhitePixels = new List<Point>() { new Point(2, 2) } } This is challenging to do manually for 5 characters and gets much harder the greater the amount of letters that are added. You also want to guarantee that you have the minimum set of unique characteristics of a letter since you want it to be optimized as much as possible. 
I want to create an algorithm that will identify the unique characteristics of all the letters and would generate similar code to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient definition set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to). An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26 time reduction. This is how it might work: I would populate it with data similar to the following: Letters["01110 00100 00100 00100 01110"] = "I"; Letters["00100 01010 01110 01010 01010"] = "A"; Letters["00100 01010 01010 01010 00100"] = "O"; Letters["01100 01010 01100 01010 01100"] = "B"; Now when I reach a location in the image to process, I convert it to a string such as: "01110 00100 00100 00100 01110" and simply find it in the hash table. This solution seems very simple, however, this still requires 110 iterations to generate this string for each letter. In big O notation, the algorithm is the same since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of it taking 26 minutes, it would take 1 minute). Update: Most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character and rather provide alternative solutions. I am still looking for this solution which, as far as I can tell, is the only way to achieve the fastest OCR processing. I just came up with a partial solution: For each pixel, in the grid, store the letters that have it as a black pixel. Using these letters: I A O B L 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 You would have something like this: CreatePixel(new Point(0, 0), new List<Char>() { }); CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' }); CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' }); CreatePixel(new Point(3, 0), new List<Char>() { 'I' }); CreatePixel(new Point(4, 0), new List<Char>() { }); CreatePixel(new Point(0, 1), new List<Char>() { }); CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' }); CreatePixel(new Point(2, 1), new List<Char>() { 'I' }); CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' }); // ... CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' }); CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' }); // ... CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' }); CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' }); CreatePixel(new Point(4, 4), new List<Char>() { }); Now for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the amount of other characters in the bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the least amount of other characters is (3,0). 
In fact, it only has 1 character, meaning it must be an "I" in this case, and we found our unique characteristic. You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L", this means that it could be used as a white pixel test. Similarly, (2,4) doesn't contain an 'A'. Buckets that either contain all the letters or none of the letters can be discarded immediately, since these pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4). It gets trickier when you don't have a 1 pixel test for a letter, for example in the case of 'O' and 'B'. Let's walk through the test for 'O'... It's contained in the following buckets: // Bucket Count Letters // 2,0 4 I, A, O, B // 3,1 3 A, O, B // 3,2 2 A, O // 2,4 4 I, O, B, L Additionally, we also have a few white pixel tests that can help: (I only listed those that are missing at most 2). The Missing Count was calculated as (5 - Bucket.Count). // Bucket Missing Count Missing Letters // 1,0 2 A, O // 1,1 2 I, O // 2,2 2 O, L // 3,4 2 O, B So now we can take the shortest black pixel bucket (3,2) and see that when we test for (3,2) we know it is either an 'A' or an 'O'. So we need an easy way to tell the difference between an 'A' and an 'O'. We could either look for a black pixel bucket that contains 'O' but not 'A' (e.g. 2,4) or a white pixel bucket that contains an 'O' but not an 'A' (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter 'O' with only 2 tests. This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more pixels overlapping? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the least from all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of getting the least amount of checks as possible, and ensure that I am not running extraneous tests?
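
    The bucket-walking described above can be automated by treating it as a small set-cover problem: for a target letter, every candidate test (pixel, expected colour) eliminates the other letters whose bitmap disagrees at that pixel, and you greedily keep the test that eliminates the most letters still in the running. Greedy set cover is not guaranteed to be minimal, but it is simple, fully automatic, and usually close. A minimal Python sketch, using the 5x5 bitmaps from the question (everything else here is assumed):

        # Greedy "distinguishing pixel" search: for each letter, pick (x, y, expected)
        # tests until every other letter has been ruled out.
        LETTERS = {
            "I": ["01110", "00100", "00100", "00100", "01110"],
            "A": ["00100", "01010", "01110", "01010", "01010"],
            "O": ["00100", "01010", "01010", "01010", "00100"],
            "B": ["01100", "01010", "01100", "01010", "01100"],
            "L": ["01000", "01000", "01000", "01000", "01110"],
        }

        def pixel(letter, x, y):
            return LETTERS[letter][y][x]            # '1' = black, '0' = white

        def distinguishing_tests(target):
            remaining = set(LETTERS) - {target}     # letters not yet ruled out
            tests = []
            while remaining:
                best, eliminated = None, set()
                for y in range(5):
                    for x in range(5):
                        expected = pixel(target, x, y)
                        kills = {c for c in remaining if pixel(c, x, y) != expected}
                        if len(kills) > len(eliminated):
                            best, eliminated = (x, y, expected), kills
                if not eliminated:                  # two letters share an identical bitmap
                    raise ValueError("cannot distinguish " + target)
                tests.append(best)
                remaining -= eliminated
            return tests

        for letter in LETTERS:
            print(letter, distinguishing_tests(letter))

    On the five sample bitmaps this finds a single test for "I", "A", "O" and "L" (for example (3, 0, '1') for "I") and two tests for "B", which has no single distinguishing pixel; for the full 26-letter, 10x11 set the loop simply keeps adding tests until each letter is pinned down.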

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!), but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file with a var_export'd array of data in it? My tests are inconclusive, as my development box (PHP 5.3 on XAMPP) keeps throwing errors when installing any of the aforementioned programs. Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place, but that is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that if the site was under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for a script's execution into one single file. This is achievable and works well, but it has the side effect of increasing memory usage dramatically even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • OpenMP timer doesn't work on inline assembly code?

    - by Brett
    I'm trying to compare some code samples for speed, and I decided to use the OpenMP timer since I'll eventually be multi-threading the code. The timer works great on two of my four code snippets, but not on the other two: start=omp_get_wtime(); /*code here*/ finish = omp_get_wtime() - start_time; The four /*code here*/ sections are serial code, xmmintrin.h code, and two inline assembly versions. The serial and xmmintrin.h code can be timed, but the inline assembly versions return -1.#IND00 for the time. I can't figure out why. Thanks for any help or suggestions!

    Read the article

  • io operations in compilers

    - by Aastha
    How are I/O operations handled by a compiler? Memory-related operations are mapped to RTL in the compiler at target code generation time; where and how exactly is the same done for I/O operations? How do the approaches differ for processors supporting memory-mapped I/O (MMIO) versus port-mapped I/O? Do compilers perform any optimizations on I/O operations?

    Read the article

  • SharePoint's CAML query the "Created By" field with username

    - by yellowblood
    Hey, I have a form for administrators where they enter a user name ("domain\name") and the code gets and sets some information based on it. It's a huge project, and some of the lists store the username as a string ("domain\name"), but some lists rely only on the auto-created "Created By" column. I want to know the fastest way to query these lists using the username string. I tried the same query I use for the first kind of list, and it obviously didn't work - <Where><Eq><FieldRef Name='UserName'/><Value Type='Text'>domain\\username</Value></Eq></Where> Thank you.

    Read the article

  • Bubble sort algorithm implementations (Haskell vs. C)

    - by kingping
    Hello. I have written two implementations of the bubble sort algorithm, one in C and one in Haskell. Haskell implementation: module Main where main = do contents <- readFile "./data" print "Data loaded. Sorting.." let newcontents = bubblesort contents writeFile "./data_new_ghc" newcontents print "Sorting done" bubblesort list = sort list [] False rev = reverse -- separated. To see rev2 = reverse -- who calls the routine sort (x1:x2:xs) acc _ | x1 > x2 = sort (x1:xs) (x2:acc) True sort (x1:xs) acc flag = sort xs (x1:acc) flag sort [] acc True = sort (rev acc) [] False sort _ acc _ = rev2 acc I've compared the two implementations by running both on a 20 KiB file. The C implementation took about a second, the Haskell one about 1 min 10 sec. I have also profiled the Haskell application: Compile for profiling: C:\Temp ghc -prof -auto-all -O --make Main Profile: C:\Temp Main.exe +RTS -p and got these results. This is pseudocode of the algorithm: procedure bubbleSort( A : list of sortable items ) defined as: do swapped := false for each i in 0 to length(A) - 2 inclusive do: if A[i] > A[i+1] then swap( A[i], A[i+1] ) swapped := true end if end for while swapped end procedure I wonder if it's possible to make the Haskell implementation faster without changing the algorithm (there are actually a few tricks to make it work faster, but neither implementation has these optimizations).

    Read the article

  • GCC (ld) option to strip unreferenced data/functions

    - by legends2k
    I've written a program which uses a library that has numerous functions, but I only use a limited set of them. GCC is the compiler I use. Once I've created a binary and used nm to inspect its symbols, it shows all the unwanted (unreferenced) functions that are never called. How do I remove those unreferenced functions and data from the executable? Is the -s option right? I'm told it strips all symbol table and relocation data from the binary, but does it remove the functions and data too? I'm also not sure how to verify this, since after using -s, nm no longer works because the symbol table data has been stripped as well.

    Read the article

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2,500 entities, each with approximately 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate over a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which has a major impact on performance. This information is perhaps not much to work with, but since this is an architectural problem, I am hoping an architectural pattern can help me on the way. Each entity, including its attributes, takes up approximately 500 KB of memory.

    Read the article

  • C# Sorting Question

    - by betamoo
    I wonder which C# data structure is best for sorting efficiently. Is it List, Array, or something else? And why doesn't the standard array [] implement a sort method itself? Thanks

    Read the article

  • Why index_merge is not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned,b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Read the article

  • Is using iframes to improve page performance an acceptable approach?

    - by Denis Hoctor
    Hi all, I have a complex page that has several user controls like galleries, maps, ads, etc. I've tried optimising them by ensuring full separation of HTML/CSS/JS, placing JS at the bottom of the page, and trying to ensure I have well-written code in all three, but alas I still have a slow page. It's not really noticeable in a modern browser, but I can see it in the stats and in IE6/7. So I'm now looking to do what we've done previously for Adtech Flash crap - an iframe. Apart from the SEO impact, which I'm not worried about in the case of these controls, what do people think of this as an approach? Pros and cons please. Thanks, Denis

    Read the article

  • Converting python collaborative filtering code to use Map Reduce

    - by Neil Kodner
    Using Python, I'm computing cosine similarity across items. given event data that represents a purchase (user,item), I have a list of all items 'bought' by my users. Given this input data (user,item) X,1 X,2 Y,1 Y,2 Z,2 Z,3 I build a python dictionary {1: ['X','Y'], 2 : ['X','Y','Z'], 3 : ['Z']} From that dictionary, I generate a bought/not bought matrix, also another dictionary(bnb). {1 : [1,1,0], 2 : [1,1,1], 3 : [0,0,1]} From there, I'm computing similarity between (1,2) by calculating cosine between (1,1,0) and (1,1,1), yielding 0.816496 I'm doing this by: items=[1,2,3] for item in items: for sub in items: if sub >= item: #as to not calculate similarity on the inverse sim = coSim( bnb[item], bnb[sub] ) I think the brute force approach is killing me and it only runs slower as the data gets larger. Using my trusty laptop, this calculation runs for hours when dealing with 8500 users and 3500 items. I'm trying to compute similarity for all items in my dict and it's taking longer than I'd like it to. I think this is a good candidate for MapReduce but I'm having trouble 'thinking' in terms of key/value pairs. Alternatively, is the issue with my approach and not necessarily a candidate for Map Reduce?
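
    One way to start thinking in key/value pairs for this problem: in the map phase, emit one record per item a user bought (for the per-item counts) and one record per item pair bought by the same user (for the co-occurrence counts); in the reduce phase, sum per key and combine the counts. For bought/not-bought vectors, cosine(a, b) collapses to co_count / sqrt(count_a * count_b), which matches the 0.816496 for items 1 and 2 above, so the full matrix is never needed. A plain-Python sketch follows; the two functions only simulate what a framework such as Hadoop streaming or mrjob would actually distribute, and all names are made up.

        # MapReduce-style sketch: map emits key/value pairs, reduce aggregates per key.
        import math
        from collections import defaultdict
        from itertools import combinations

        purchases = [("X", 1), ("X", 2), ("Y", 1), ("Y", 2), ("Z", 2), ("Z", 3)]

        def map_phase(records):
            """Emit ((item,), 1) singletons and ((a, b), 1) co-occurrence pairs per user."""
            by_user = defaultdict(set)
            for user, item in records:
                by_user[user].add(item)
            for items in by_user.values():
                for item in items:
                    yield (item,), 1
                for a, b in combinations(sorted(items), 2):
                    yield (a, b), 1

        def reduce_phase(pairs):
            counts = defaultdict(int)
            for key, value in pairs:
                counts[key] += value
            sims = {}
            for key, co in counts.items():
                if len(key) == 2:              # pair key -> cosine for binary vectors
                    a, b = key
                    sims[(a, b)] = co / math.sqrt(counts[(a,)] * counts[(b,)])
            return sims

        print(reduce_phase(map_phase(purchases)))   # {(1, 2): 0.816..., (2, 3): 0.577...}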

    Read the article

  • Problem with Linq query and date format

    - by Alan T
    Hi all, I have a C# console application written using Visual Studio 2008. My system culture is en-GB. I have a LINQ query that looks like this: var myDate = "19-May-2010"; var cus = from x in _dataContext.testTable where x.CreateDate == Convert.ToDateTime(myDate) select x; The resulting SQL query generates an error because it renders the date as "19/05/2010", which is then interpreted as an invalid date. For some reason, even though my system culture is set to en-GB, it looks like the date is being interpreted as an en-US date. Any ideas how I can get around this? Thanks. Alan T
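
    The root cause in this class of bug is usually an implicit, culture-dependent conversion somewhere in the chain; the general cure is to parse the string against an explicit format (in .NET, DateTime.ParseExact with a CultureInfo such as CultureInfo.InvariantCulture) rather than relying on whatever culture the conversion picks up. The sketch below only illustrates that principle in Python; it is not the C#/LINQ fix itself.

        # Principle only: an explicit format means the ambient locale
        # (en-GB vs en-US) can never change what the string means.
        from datetime import datetime

        my_date = "19-May-2010"
        parsed = datetime.strptime(my_date, "%d-%b-%Y")    # day, abbreviated month, year
        print(parsed.date())                               # 2010-05-19

        # All-numeric forms are the ambiguous ones; spell the order out explicitly.
        print(datetime.strptime("19/05/2010", "%d/%m/%Y").date())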

    Read the article

  • Optimizing a 3D World Javascript Animation

    - by johnny
    Hi! I've recently come up with the idea to create a tag cloud like animation shaped like the earth. I've extracted the coastline coordinates from ngdc.noaa.gov and wrote a little script that displayed it in my browser. Now as you can imagine, the whole coastline consists of about 48919 points, which my script would individually render (each coordinate being represented by one span). Obviously no browser is capable of rendering this fluently - but it would be nice if I could render as much as let's say 200 spans (twice as much as now) on my old p4 2.8 Ghz (as a representative benchmark). Are there any javascript optimizations I could use in order to speed up the display of those spans? One 'coordinate': <div id="world_pixels"> <span id="wp_0" style="position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;" onmouseover="magnify_world_pixel('wp_0');" onmouseout="shrink_world_pixel('wp_0');" onClick="set_askcue_bar('', 'new york')">new york</span> </div> The script: $(document).ready(function(){ world_pixels = $("#world_pixels span"); world_pixels.spin(); setInterval("world_pixels.spin()",1500); }); z = new Array(); $.fn.spin = function () { for(i=0; i<this.length; i++) { /*actual screen coordinates: x/y/z --> left/font-size/top 300/13/0 300/6/300 | / |/ 0/13/300 ----|---- 600/13/300 /| / | 300/20/300 300/13/600 */ /*scale font size*/ var resize_x = 1; /*scale width*/ var resize_y = 2.5; /*scale height*/ var resize_z = 2.5; var from_left = 300; var from_top = 20; /*actual math coordinates: 1 -1 | / |/ 1 ----|---- -1 /| / | 1 -1 */ //var get_element = document.getElementById(); //var font_size = parseInt(this.style.fontSize); var font_size = parseInt($(this[i]).css("font-size")); var left = parseInt($(this[i]).css("left")); if (coast_line_array[i][1]) { } else { var top = parseInt($(this[i]).css("top")); z[i] = from_top + (top - (300 * resize_z)) / (300 * resize_z); //global beacause it's used in other functions later on var top_new = from_top + Math.round(Math.cos(coast_line_array[i][2]/90*Math.PI) * (300 * resize_z) + (300 * resize_z)); $(this[i]).css("top", top_new); coast_line_array[i][3] = 1; } var x = resize_x * (font_size - 13) / 7; var y = from_left + (left- (300 * resize_y)) / (300 * resize_y); if (y >= 0) { this[i].phi = Math.acos(x/(Math.sqrt(x^2 + y^2))); } else { this[i].phi = 2*Math.PI - Math.acos(x/(Math.sqrt(x^2 + y^2))); i } this[i].theta = Math.acos(z[i]/Math.sqrt(x^2 + y^2 + z[i]^2)); var font_size_new = resize_x * Math.round(Math.sin(coast_line_array[i][4]/90*Math.PI) * Math.cos(coast_line_array[i][0]/180*Math.PI) * 7 + 13); var left_new = from_left + Math.round(Math.sin(coast_line_array[i][5]/90*Math.PI) * Math.sin(coast_line_array[i][0]/180*Math.PI) * (300 * resize_y) + (300 * resize_y)); //coast_line_array[i][6] = coast_line_array[i][7]+1; if ((coast_line_array[i][0] + 1) > 180) { coast_line_array[i][0] = -180; } else { coast_line_array[i][0] = coast_line_array[i][0] + 0.25; } $(this[i]).css("font-size", font_size_new); $(this[i]).css("left", left_new); } } resize_x = 1; function magnify_world_pixel(element) { $("#"+element).animate({ fontSize: resize_x*30+"px" }, { duration: 1000 }); } function shrink_world_pixel(element) { $("#"+element).animate({ fontSize: resize_x*6+"px" }, { duration: 1000 }); } I'd appreciate any suggestions to optimize my script, maybe there is even a totally different approach on how to go about this. 
The whole .js file which stores the array for all the coordinates is available on my page, the file is about 2.9 mb, so you might consider pulling the .zip for local testing: metaroulette.com/files/31218.zip metaroulette.com/files/31218.js P.S. the php I use to create the spans: <?php //$arbitrary_characters = array('a','b','c','ddsfsdfsdf','e','f','g','h','isdfsdffd','j','k','l','mfdgcvbcvbs','n','o','p','q','r','s','t','uasdfsdf','v','w','x','y','z','0','1','2','3','4','5','6','7','8','9',); $arbitrary_characters = array('cat','table','cool','deloitte','askcue','what','more','less','adjective','nice','clinton','mars','jupiter','testversion','beta','hilarious','lolcatz','funny','obama','president','nice','what','misplaced','category','people','religion','global','skyscraper','new york','dubai','helsinki','volcano','iceland','peter','telephone','internet', 'dialer', 'cord', 'movie', 'party', 'chris', 'guitar', 'bentley', 'ford', 'ferrari', 'etc', 'de facto'); for ($i=0; $i<96; $i++) { $arb_digits = rand (0,45); $arbitrary_character = $arbitrary_characters[$arb_digits]; //$arbitrary_character = "."; echo "<span id=\"wp_$i\" style=\"position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;\" onmouseover=\"magnify_world_pixel('wp_$i');\" onmouseout=\"shrink_world_pixel('wp_$i');\" onClick=\"set_askcue_bar('', '$arbitrary_character')\">$arbitrary_character</span>\n"; } ?>

    Read the article

  • Most optimized way to calculate modulus in C

    - by hasanatkazmi
    I want to minimize the cost of calculating a modulus in C. Say I have a number x, and n is the number that will divide x. When n == 65536 (which happens to be 2^16), mod = x % n takes 11 assembly instructions as produced by GCC, whereas mod = x & 0xffff, which is equal to mod = x & 65535, takes 4 assembly instructions - so GCC doesn't optimize it to this extent. In my case n is not 2^(int) but the largest prime less than 2^16, which is 65521. As I showed for n == 2^16, bit-wise operations can optimize the computation. What bit-wise operations can I perform when n == 65521 to calculate the modulus?
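
    Because 65521 = 2^16 - 15 (the same prime Adler-32 uses), you can exploit x = (x >> 16) * 65536 + (x & 0xffff) together with 65536 mod 65521 = 15: fold the high half down with x -> (x & 0xffff) + 15 * (x >> 16), repeat until the value fits in 16 bits (at most three folds for a 32-bit input), then subtract 65521 at most once. A small Python sketch of the identity; the arithmetic transfers directly to C on unsigned integers.

        # Sketch of the reduction for n = 65521 = 2**16 - 15.
        # x = (x >> 16) * 65536 + (x & 0xffff) and 65536 % 65521 == 15, so
        # x is congruent to (x & 0xffff) + 15 * (x >> 16) modulo 65521.
        def mod_65521(x):
            while x >> 16:                       # fold the high bits down
                x = (x & 0xffff) + 15 * (x >> 16)
            if x >= 65521:                       # one conditional subtract finishes it
                x -= 65521
            return x

        # Self-check against the plain % operator.
        import random
        for _ in range(100000):
            v = random.getrandbits(32)
            assert mod_65521(v) == v % 65521
        print("ok")

    In C the same fold works on a uint32_t, and since at most three folds are ever needed for a 32-bit input, the loop can be unrolled into straight-line shifts, masks and adds.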

    Read the article
