Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.

Page 104/331

  • Throwing cats out of windows

    - by AndrewF
    Imagine you're in a tall building with a cat. The cat can survive a fall out of a low story window, but will die if thrown from a high floor. How can you figure out the longest drop that the cat can survive, using the fewest attempts? Obviously, if you only have one cat, then you can only search linearly. First throw the cat from the first floor. If it survives, throw it from the second. Eventually, after being thrown from floor f, the cat will die. You then know that floor f-1 was the maximal safe floor. But what if you have more than one cat? You can now try some sort of logarithmic search. Let's say that the building has 100 floors and you have two identical cats. If you throw the first cat out of the 50th floor and it dies, then you only have to search 50 floors linearly. You can do even better if you choose a lower floor for your first attempt. Let's say that you choose to tackle the problem 20 floors at a time and that the first fatal floor is #50. In that case, your first cat will survive falls from floors 20 and 40 before dying from floor 60. You just have to check floors 41 through 49 individually. That's a total of 12 attempts, which is much better than the 50 you would need had you attempted binary elimination. In general, what's the best strategy, and what is its worst-case complexity, for an n-story building with 2 cats? What about for n floors and m cats? Assume that all cats are equivalent: they will all survive or die from a fall from a given window. Also, every attempt is independent: if a cat survives a fall, it is completely unharmed. This isn't homework, although I may have solved it for a school assignment once. It's just a whimsical problem that popped into my head today and I don't remember the solution. Bonus points if anyone knows the name of this problem or of the solution algorithm.
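
    This is the classic "egg drop" puzzle. A minimal dynamic-programming sketch (Python; the function name is mine, not from the post) that computes the optimal worst-case number of drops for m cats and n floors: dp[m][n] is the fewest drops needed, and a first drop from floor k either kills the cat (leaving m-1 cats and the k-1 floors below) or not (leaving m cats and the n-k floors above).

        def min_worst_case_drops(cats, floors):
            """Fewest drops that guarantee finding the highest safe floor."""
            INF = float("inf")
            # dp[m][n] = fewest drops needed with m cats and n candidate floors
            dp = [[0] * (floors + 1) for _ in range(cats + 1)]
            for n in range(1, floors + 1):
                dp[1][n] = n                      # one cat: linear search only
            for m in range(2, cats + 1):
                for n in range(1, floors + 1):
                    best = INF
                    for k in range(1, n + 1):     # first drop from floor k
                        # dies  -> m-1 cats, k-1 floors below still unknown
                        # lives -> m cats,   n-k floors above still unknown
                        best = min(best, 1 + max(dp[m - 1][k - 1], dp[m][n - k]))
                    dp[m][n] = best
            return dp[cats][floors]

        print(min_worst_case_drops(2, 100))       # 14

    For 2 cats the answer is the smallest t with t(t+1)/2 >= n (14 for a 100-floor building), because t attempts with two cats can cover at most t + (t-1) + ... + 1 floors.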

    Read the article

  • Calculate minimum moves to solve a puzzle

    - by Luke
    I'm in the process of creating a game where the user will be presented with 2 sets of colored tiles. In order to ensure that the puzzle is solvable, I start with one set, copy it to a second set, then swap tiles from one set to another. Currently, (and this is where my issue lies) the number of swaps is determined by the level the user is playing - 1 swap for level 1, 2 swaps for level 2, etc. This same number of swaps is used as a goal in the game. The user must complete the puzzle by swapping a tile from one set to the other to make the 2 sets match (by color). The order of the tiles in the (user) solved puzzle doesn't matter as long as the 2 sets match. The problem I have is that as the number of swaps I used to generate the puzzle approaches the number of tiles in each set, the puzzle becomes easier to solve. Basically, you can just drag from one set in whatever order you need for the second set and solve the puzzle with plenty of moves left. What I am looking to do is after I finish building the puzzle, calculate the minimum number of moves required to solve the puzzle. Again, this is almost always less than the number of swaps used to create the puzzle, especially as the number of swaps approaches the number of tiles in each set. My goal is to calculate the best case scenario and then give the user a "fudge factor" (i.e. 1.2 times the minimum number of moves). Solving the puzzle in under this number of moves will result in passing the level. A little background as to how I currently have the game configured: Levels 1 to 10: 9 tiles in each set. 5 different color tiles. Levels 11 to 20: 12 tiles in each set. 7 different color tiles. Levels 21 to 25: 15 tiles in each set. 10 different color tiles. Swapping within a set is not allowed. For each level, there will be at least 2 tiles of a given color (one for each set in the solved puzzle). Is there any type of algorithm anyone could recommend to calculate the minimum number of moves to solve a given puzzle?
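
    Because order within a set doesn't matter, only the per-colour counts do. Each legal move exchanges one tile from each set, so it can fix at most one "surplus" tile, and the minimum number of moves equals the number of tiles in one set that exceed that set's target count for their colour. A small sketch of that count (Python, names mine; it assumes every colour occurs an even number of times overall, which is true by construction here):

        from collections import Counter

        def min_swaps(set1, set2):
            """Fewest cross-set swaps to make the two sets match by colour."""
            count1 = Counter(set1)
            total = Counter(set1) + Counter(set2)
            surplus = 0
            for colour, t in total.items():
                target = t // 2                   # each set needs half of every colour
                surplus += max(0, count1[colour] - target)
            return surplus                        # one swap removes one surplus tile

        print(min_swaps(["r", "r", "g"], ["g", "b", "b"]))   # 1

    Multiplying this by your fudge factor should give a fair move budget regardless of how many swaps were used to scramble the puzzle.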

    Read the article

  • Heap Behavior in C++

    - by wowus
    Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption, but does the OS already have redundant behavior with this technique, or does it do its best to conserve memory? Basically, given that memory usage isn't as much of an issue as performance, should I do this?

    Read the article

  • Calendar Day View in PHP

    - by JamesArmes
    I'm working on adding a day view option to an existing calendar solution. Like many people implementing their own calendars, I am trying to model Google Calendar. They have an excellent calendar solution and their day view provides a lot of flexibility. For the most part, the implementation is going well; however, I'm having issues when it comes to conflicting events. Essentially, I want the events to share the same space, side by side. Events that start at the same time should have the longest event first. In the example data set I'm working with, I have four events: A: 10:30 - 11:30 B: 13:30 - 14:30 C: 10:30 - 11:00 D: 10:45 - 14:00 I can handle A, C, and D just fine; the problem comes with B. A should be left of C which should be left of D, each taking one third of the width (it's fixed width so I can do simple math to figure that out). The problem is that B should be under A and C, to the left of D. Ideally, B would take up the same amount of space as A and C (two thirds width), but I would even settle for it taking up only one third width. My array of events looks similar to the following: $events = array( '1030' => array( 'uniqueID1' => array( 'start_time' => '1030', 'end_time' => '1130', ), 'uniqueID2' => array( 'start_time' => '1030', 'end_time' => '1100', ), ), '1045' => array( 'uniqueID3' => array( 'start_time' => '1045', 'end_time' => '1400', ), ), '1330' => array( 'uniqueID4' => array( 'start_time' => '1330', 'end_time' => '1430', ), ), ); My plan is to add some indexes to each event that include how many events it conflicts with (so I can calculate the width) and which position in that series it should be (so I can calculate the left value). However, that doesn't help with B. I'm thinking I might need an algorithm that uses some basic geometry and matrices, but I'm not sure where to begin. Any help is greatly appreciated.
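
    The usual approach (and roughly what Google Calendar does) is to group transitively overlapping events into "collision clusters", then greedily drop each event into the leftmost column whose previous event has already ended; an event's width is then derived from the cluster's column count, and its left offset from its column index. A rough sketch of the column assignment (Python rather than PHP, names mine):

        def assign_columns(events):
            """events: (id, start, end) with times in minutes since midnight.
            Returns {id: (column, columns_in_cluster)} for width/offset maths."""
            events = sorted(events, key=lambda e: (e[1], -(e[2] - e[1])))  # longest first on ties
            columns, cluster, layout = [], [], {}
            def close_cluster():
                for eid in cluster:
                    layout[eid] = (layout[eid][0], len(columns))
            for eid, start, end in events:
                if columns and all(start >= col_end for col_end in columns):
                    close_cluster()               # nothing open overlaps: start a new cluster
                    columns, cluster = [], []
                for i, col_end in enumerate(columns):
                    if start >= col_end:          # leftmost free column
                        columns[i] = end
                        layout[eid] = (i, None)
                        break
                else:
                    columns.append(end)
                    layout[eid] = (len(columns) - 1, None)
                cluster.append(eid)
            close_cluster()
            return layout

        events = [("A", 630, 690), ("B", 810, 870), ("C", 630, 660), ("D", 645, 840)]
        print(assign_columns(events))
        # {'A': (0, 3), 'C': (1, 3), 'D': (2, 3), 'B': (0, 3)}

    With the example data this puts A, C and D in columns 0, 1 and 2 of a 3-column cluster and B back in column 0. A second pass can then widen B to the right (across column 1, which is free after 11:00) until it hits D's column, giving it the two-thirds width you would ideally like.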

    Read the article

  • Symfony, in remote host: Error 500. Unknown record property / related component "algorithm" on "sfGu

    - by user248959
    Hi, after deploying, i get the error below after loggingin. Sf 1.3, sfDoctrineGuardPlugin. And i have this schema.yml in config/doctrine: Usuario: inheritance: extends: sfGuardUser type: simple columns: username: type: string(128) notnull: false unique: true nombre_apellidos: string(60) sexo: string(5) fecha_nac: date provincia: string(60) localidad: string(255) email_address: string(255) fotografia: string(255) avatar: string(255) avatar_mensajes: string(255) relations: Usuario: local: user1_id foreign: user2_id refClass: AmigoUsuario equal: true 500 | Internal Server Error | Doctrine_Record_UnknownPropertyException Unknown record property / related component "algorithm" on "sfGuardUser" stack trace * at () in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record/Filter/Standard.php line 55 ... 52. */ 53. public function filterGet(Doctrine_Record $record, $name) 54. { 55. throw new Doctrine_Record_UnknownPropertyException(sprintf('Unknown record property / related component "%s" on "%s"', $name, get_class($record))); 56. } 57. } * at Doctrine_Record_Filter_Standard->filterGet(object('sfGuardUser'), 'algorithm') in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1382 ... 1379. $success = false; 1380. foreach ($this->_table->getFilters() as $filter) { 1381. try { 1382. $value = $filter->filterGet($this, $fieldName); 1383. $success = true; 1384. } catch (Doctrine_Exception $e) {} 1385. } * at Doctrine_Record->_get('algorithm', 1) in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1337 ... 1334. return $this->$accessor($load); 1335. } 1336. } 1337. return $this->_get($fieldName, $load); 1338. } 1339. 1340. protected function _get($fieldName, $load = true) * at Doctrine_Record->get('algorithm') in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/record/sfDoctrineRecord.class.php line 212 ... 209. return call_user_func_array( 210. array($this, $verb), 211. array_merge(array($entityName), $arguments) 212. ); 213. } else { 214. $failed = true; 215. } * at sfDoctrineRecord->__call(array(object('sfGuardUser'), 'get'), array('algorithm')) in n/a line n/a ... * at sfGuardUser->getAlgorithm('getAlgorithm', array()) in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 96 ... 93. */ 94. public function checkPasswordByGuard($password) 95. { 96. $algorithm = $this->getAlgorithm(); 97. if (false !== $pos = strpos($algorithm, '::')) 98. { 99. $algorithm = array(substr($algorithm, 0, $pos), substr($algorithm, $pos + 2)); * at PluginsfGuardUser->checkPasswordByGuard() in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 83 ... 80. } 81. else 82. { 83. return $this->checkPasswordByGuard($password); 84. } 85. } 86. * at PluginsfGuardUser->checkPassword('m') in SF_ROOT_DIR/lib/sfGuardValidatorUserByEmail.class.php line 28 ... 25. { 26. // password is ok? 27. 28. if ($user->checkPassword($password)) 29. { 30. 31. //die("entro"); * at sfGuardValidatorUserByEmail->doClean('m') Any idea? Javi

    Read the article

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool. Consider the following "blueprint" which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage. . d . d c c d d d d c c . d d d . c d d d d d c . d . d d c To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions. For the above blueprint, the solution looks like this: . 9 . 0 3 2 8 1 1 1 3 2 . 1 1 1 . 2 7 1 1 1 4 2 . 6 . 5 4 2 Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant. My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle. I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough. More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead: . 3 . 5 6 8 1 3 4 5 6 8 . 3 4 5 . 8 2 3 4 5 7 8 . 3 . 5 7 8 This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-Hard, based on my limited math knowledge. 
Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
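
    A common way to speed up the "find the largest remaining rectangle" step is the maximal-rectangle-in-a-binary-matrix technique: for each command value, treat matching (and not-yet-plotted) cells as 1s, build per-row histograms of consecutive 1s, and scan each histogram with a stack. That finds the largest rectangle in O(rows x cols) per command, instead of growing candidate rectangles from every cell. A rough Python sketch of that routine plus the greedy largest-first driver (names mine; treat it as an illustration of the technique, not Quickfort code):

        def largest_rectangle(grid, target):
            """Largest rectangle of cells equal to `target` (None = already plotted).
            Returns (area, top, left, height, width)."""
            rows, cols = len(grid), len(grid[0])
            heights = [0] * cols
            best = (0, 0, 0, 0, 0)
            for r in range(rows):
                for c in range(cols):
                    heights[c] = heights[c] + 1 if grid[r][c] == target else 0
                stack = []                                # (start_col, height), increasing
                for c, h in enumerate(heights + [0]):     # trailing 0 flushes the stack
                    start = c
                    while stack and stack[-1][1] >= h:
                        s, sh = stack.pop()
                        if sh * (c - s) > best[0]:
                            best = (sh * (c - s), r - sh + 1, s, sh, c - s)
                        start = s
                    stack.append((start, h))
            return best

        def greedy_cover(grid):
            """Largest-rectangle-first cover; returns (command, top, left, h, w) plots."""
            grid = [row[:] for row in grid]
            commands = {cell for row in grid for cell in row if cell != "."}
            plots = []
            while commands:
                (area, top, left, h, w), cmd = max(
                    ((largest_rectangle(grid, cmd), cmd) for cmd in commands),
                    key=lambda item: item[0][0])
                if area == 0:
                    break
                plots.append((cmd, top, left, h, w))
                for rr in range(top, top + h):
                    for cc in range(left, left + w):
                        grid[rr][cc] = None               # mark as plotted
            return plots

        blueprint = [". d . d c c", "d d d d c c", ". d d d . c",
                     "d d d d d c", ". d . d d c"]
        print(greedy_cover([row.split() for row in blueprint]))  # first plot: the 3x3 'd' block

    Whether the fewest-rectangles goal is tractable depends on the exact formulation (minimum rectangle cover with overlaps allowed is NP-hard in general), so largest-first with a fast discovery routine is a reasonable practical trade-off.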

    Read the article

  • Is it possible to shuffle a 2D matrix while preserving row AND column frequencies?

    - by j_random_hacker
    Suppose I have a 2D array like the following: GACTG AGATA TCCGA Each array element is taken from a small finite set (in my case, DNA nucleotides -- {A, C, G, T}). I would like to randomly shuffle this array somehow while preserving both row and column nucleotide frequencies. Is this possible? Can it be done efficiently? [EDIT]: By this I mean I want to produce a new matrix where each row has the same number of As, Cs, Gs and Ts as the corresponding row of the original matrix, and where each column has the same number of As, Cs, Gs and Ts as the corresponding column of the original matrix. Permuting the rows or columns of the original matrix will not achieve this in general. (E.g. for the example above, the top row has 2 Gs, and 1 each of A, C and T; if this row was swapped with row 2, the top row in the resulting matrix would have 3 As, 1 G and 1 T.) It's simple enough to preserve just column frequencies by shuffling a column at a time, and likewise for rows. But doing this will in general alter the frequencies of the other kind. My thoughts so far: If it's possible to pick 2 rows and 2 columns so that the 4 elements at the corners of this rectangle have the pattern XY YX for some pair of distinct elements X and Y, then replacing these 4 elements with YX XY will maintain both row and column frequencies. In the example at the top, this can be done for (at least) rows 1 and 2 and columns 2 and 5 (whose corners give the 2x2 matrix AG;GA), and for rows 1 and 3 and columns 1 and 4 (whose corners give GT;TG). Clearly this could be repeated a number of times to produce some level of randomisation. Generalising a bit, any "subrectangle" induced by a subset of rows and a subset of columns, in which the frequencies of all rows are the same and the frequencies of all columns are the same, can have both its rows and columns permuted to produce a valid complete rectangle. (Of these, only those subrectangles in which at least 1 element is changed are actually interesting.) Big questions: Are all valid complete matrices reachable by a series of such "subrectangle rearrangements"? I suspect the answer is yes. Are all valid subrectangle rearrangements decomposable into a series of 2x2 swaps? I suspect the answer is no, but I hope it's yes, since that would seem to make it easier to come up with an efficient algorithm. Can some or all of the valid rearrangements be computed efficiently? This question addresses a special case in which the set of possible elements is {0, 1}. The solutions people have come up with there are similar to what I have come up with myself, and are probably usable, but not ideal as they require an arbitrary amount of backtracking to work correctly. Also I'm concerned that only 2x2 swaps are considered. Finally, I would ideally like a solution that can be proven to select a matrix uniformly at random from the set of all matrices having identical row frequencies and column frequencies to the original. I know, I'm asking for a lot :)
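
    A sketch of the 2x2 "checkerboard swap" move as a straightforward MCMC-style shuffle (Python, names mine). It preserves every row's and column's symbol counts exactly; note, though, that after a finite number of swaps the sampling is only approximately uniform, and whether this move set reaches all valid matrices for alphabets larger than {0, 1} is exactly your open question:

        import random

        def shuffle_preserving_margins(matrix, n_swaps=100000, rng=random):
            """Randomise a matrix while keeping all row and column symbol counts."""
            m = [row[:] for row in matrix]
            rows, cols = len(m), len(m[0])
            for _ in range(n_swaps):
                r1, r2 = rng.sample(range(rows), 2)
                c1, c2 = rng.sample(range(cols), 2)
                a, b = m[r1][c1], m[r1][c2]
                # only corners of the form  X Y / Y X  (X != Y) can be flipped
                if a != b and m[r2][c1] == b and m[r2][c2] == a:
                    m[r1][c1], m[r1][c2] = b, a
                    m[r2][c1], m[r2][c2] = a, b
            return m

        original = [list("GACTG"), list("AGATA"), list("TCCGA")]
        print(["".join(row) for row in shuffle_preserving_margins(original)])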

    Read the article

  • Calculate the number of ways to roll a certain number

    - by helloworld
    I'm a high school Computer Science student, and today I was given a problem to: Program Description: There is a belief among dice players that in throwing three dice a ten is easier to get than a nine. Can you write a program that proves or disproves this belief? Have the computer compute all the possible ways three dice can be thrown: 1 + 1 + 1, 1 + 1 + 2, 1 + 1 + 3, etc. Add up each of these possibilities and see how many give nine as the result and how many give ten. If more give ten, then the belief is proven. I quickly worked out a brute force solution, as such int sum,tens,nines; tens=nines=0; for(int i=1;i<=6;i++){ for(int j=1;j<=6;j++){ for(int k=1;k<=6;k++){ sum=i+j+k; //Ternary operators are fun! tens+=((sum==10)?1:0); nines+=((sum==9)?1:0); } } } System.out.println("There are "+tens+" ways to roll a 10"); System.out.println("There are "+nines+" ways to roll a 9"); Which works just fine, and a brute force solution is what the teacher wanted us to do. However, it doesn't scale, and I am trying to find a way to make an algorithm that can calculate the number of ways to roll n dice to get a specific number. Therefore, I started generating the number of ways to get each sum with n dice. With 1 die, there is obviously 1 solution for each. I then calculated, through brute force, the combinations with 2 and 3 dice. These are for two: There are 1 ways to roll a 2 There are 2 ways to roll a 3 There are 3 ways to roll a 4 There are 4 ways to roll a 5 There are 5 ways to roll a 6 There are 6 ways to roll a 7 There are 5 ways to roll a 8 There are 4 ways to roll a 9 There are 3 ways to roll a 10 There are 2 ways to roll a 11 There are 1 ways to roll a 12 Which looks straightforward enough; it can be calculated with a simple linear absolute value function. But then things start getting trickier. With 3: There are 1 ways to roll a 3 There are 3 ways to roll a 4 There are 6 ways to roll a 5 There are 10 ways to roll a 6 There are 15 ways to roll a 7 There are 21 ways to roll a 8 There are 25 ways to roll a 9 There are 27 ways to roll a 10 There are 27 ways to roll a 11 There are 25 ways to roll a 12 There are 21 ways to roll a 13 There are 15 ways to roll a 14 There are 10 ways to roll a 15 There are 6 ways to roll a 16 There are 3 ways to roll a 17 There are 1 ways to roll a 18 So I look at that, and I think: Cool, Triangular numbers! However, then I notice those pesky 25s and 27s. So it's obviously not triangular numbers, but still some polynomial expansion, since it's symmetric. So I take to Google, and I come across this page that goes into some detail about how to do this with math. It is fairly easy(albeit long) to find this using repeated derivatives or expansion, but it would be much harder to program that for me. I didn't quite understand the second and third answers, since I have never encountered that notation or those concepts in my math studies before. Could someone please explain how I could write a program to do this, or explain the solutions given on that page, for my own understanding of combinatorics? EDIT: I'm looking for a mathematical way to solve this, that gives an exact theoretical number, not by simulating dice
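
    For an exact count without simulating dice, a standard dynamic-programming approach builds the distribution one die at a time: the number of ways to reach total s with k dice is the sum of the ways to reach s - face with k - 1 dice, over faces 1..6. A short sketch (Python, names mine):

        def ways_to_roll(n_dice, sides=6):
            """ways[s] = number of ordered ways to roll a total of s with n_dice dice."""
            ways = {0: 1}
            for _ in range(n_dice):
                nxt = {}
                for total, count in ways.items():
                    for face in range(1, sides + 1):
                        nxt[total + face] = nxt.get(total + face, 0) + count
                ways = nxt
            return ways

        three = ways_to_roll(3)
        print(three[9], three[10])     # 25 27 -> ten really is (slightly) easier than nine

    This is what the generating-function answers you found are computing: expanding (x + x^2 + ... + x^6)^n and reading off the coefficient of x^s. If you want a closed form, inclusion-exclusion gives ways(n, s) = sum over k >= 0 of (-1)^k * C(n, k) * C(s - 6k - 1, n - 1), where terms with a negative upper index are zero; for three dice it also gives 25 and 27.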

    Read the article

  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below. Level 1 (Top Level) | - - - - - | | A | B | C | | - - - - - | | D | E | F | | - - - - - | | G | H | I | | - - - - - | Level 2 | -Block A- | -Block B- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | | -Block D- | -Block E- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | . . . . . . This diagram is simplified from my actual need but the concept is the same. There is a top level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into level 3, 4 and 5 for my project but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell Ids where each cell Id has the format (level 1 id)(level 2 id). Let's start by comparing just 2 cells, A8 and A9. That's easy because they are in the same block. private static RelativePosition getRelativePositionInTheSameBlock(String v1, String v2) { RelativePosition relativePosition; if( v1-v2 == -1 ) { relativePosition = RelativePosition.LEFT_OF; } else if (v1-v2 == 1) { relativePosition = RelativePosition.RIGHT_OF; } else if (v1-v2 == -BLOCK_WIDTH) { relativePosition = RelativePosition.TOP_OF; } else if (v1-v2 == BLOCK_WIDTH) { relativePosition = RelativePosition.BOTTOM_OF; } else { relativePosition = RelativePosition.NOT_ADJACENT; } return relativePosition; } An A9 - B7 comparison could be done by checking if A is a multiple of BLOCK_WIDTH and whether B is (A-BLOCK_WIDTH+1). Either that or just check naively if the A/B pair is 3-1, 6-4 or 9-7 for better readability. For B7 - D3, they are not adjacent but D3 is adjacent to A9 so I can do a similar adjacency test as above. So getting away from the little details and focusing on the big picture. Is this really the best way to do it? Keeping in mind the following points: I actually have 5 levels not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B". Adding a new cell to compare still requires me to compare all the other cells before it (seems like the best I could do for this step is O(n)) The cells aren't all 3x3 blocks, they're just that way for my example. They could be MxN blocks with different M and N for different levels. In my current implementation above, I have separate functions to check adjacency if the cells are in the same blocks, if they are in separate horizontally adjacent blocks or if they are in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below. Judging by the complexity of having to deal with mulitple functions for different edge cases at different levels and having 5 levels of nested if statements. I'm wondering if another design is more suitable. Perhaps a more recursive solution, use of other data structures, or perhaps map the entire multi-level grid to a single-level grid (my quick calculations gives me about 700,000+ atomic cell ids). Even if I go that route, mapping from multi-level to single level is a non-trivial task in itself.
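
    One way to sidestep the per-level case analysis entirely is to convert every hierarchical id into absolute (row, col) coordinates on the fully flattened grid; adjacency is then just a Manhattan distance of 1. The conversion is a positional-number-system walk down the levels, so it copes with any MxN per level and any depth. A rough sketch (Python; the per-level dimensions and the one-character-per-level parsing are assumptions for the 3x3 example, not your real data):

        # (rows, cols) of a block at each level, top level first
        LEVEL_DIMS = [(3, 3), (3, 3)]

        def cell_to_coords(cell_id):
            """'A8' -> absolute (row, col) on the flattened grid."""
            row, col = 0, 0
            for depth, ch in enumerate(cell_id):
                rows_here, cols_here = LEVEL_DIMS[depth]
                index = (ord(ch) - ord("A")) if ch.isalpha() else (int(ch) - 1)
                r, c = divmod(index, cols_here)
                scale_r = scale_c = 1
                for rr, cc in LEVEL_DIMS[depth + 1:]:   # atomic cells spanned by one block here
                    scale_r *= rr
                    scale_c *= cc
                row += r * scale_r
                col += c * scale_c
            return row, col

        def adjacent(id1, id2):
            (r1, c1), (r2, c2) = cell_to_coords(id1), cell_to_coords(id2)
            return abs(r1 - r2) + abs(c1 - c2) == 1

        print(cell_to_coords("A8"), cell_to_coords("A9"), cell_to_coords("B7"))  # (2, 1) (2, 2) (2, 3)
        print(adjacent("A8", "A9"), adjacent("A9", "B7"), adjacent("B7", "D3"))  # True True False

    Comparing a new cell against m existing ones is still O(m), but each comparison is now a couple of subtractions with no per-level branching, and the ~700,000 atomic cells never need to be materialised - only the coordinates of the cells you are actually given.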

    Read the article

  • Prim's MST algorithm implementation with Java

    - by user1290164
    I'm trying to write a program that'll find the MST of a given undirected weighted graph with Kruskal's and Prim's algorithms. I've successfully implemented Kruskal's algorithm in the program, but I'm having trouble with Prim's. To be more precise, I can't figure out how to actually build the Prim function so that it'll iterate through all the vertices in the graph. I'm getting some IndexOutOfBoundsException errors during program execution. I'm not sure how much information is needed for others to get the idea of what I have done so far, but hopefully there won't be too much useless information. This is what I have so far: I have a Graph, Edge and a Vertex class. Vertex class mostly just an information storage that contains the name (number) of the vertex. Edge class can create a new Edge that has gets parameters (Vertex start, Vertex end, int edgeWeight). The class has methods to return the usual info like start vertex, end vertex and the weight. Graph class reads data from a text file and adds new Edges to an ArrayList. The text file also tells us how many vertecis the graph has, and that gets stored too. In the Graph class, I have a Prim() -method that's supposed to calculate the MST: public ArrayList<Edge> Prim(Graph G) { ArrayList<Edge> edges = G.graph; // Copies the ArrayList with all edges in it. ArrayList<Edge> MST = new ArrayList<Edge>(); Random rnd = new Random(); Vertex startingVertex = edges.get(rnd.nextInt(G.returnVertexCount())).returnStartingVertex(); // This is just to randomize the starting vertex. // This is supposed to be the main loop to find the MST, but this is probably horribly wrong.. while (MST.size() < returnVertexCount()) { Edge e = findClosestNeighbour(startingVertex); MST.add(e); visited.add(e.returnStartingVertex()); visited.add(e.returnEndingVertex()); edges.remove(e); } return MST; } The method findClosesNeighbour() looks like this: public Edge findClosestNeighbour(Vertex v) { ArrayList<Edge> neighbours = new ArrayList<Edge>(); ArrayList<Edge> edges = graph; for (int i = 0; i < edges.size() -1; ++i) { if (edges.get(i).endPoint() == s.returnVertexID() && !visited(edges.get(i).returnEndingVertex())) { neighbours.add(edges.get(i)); } } return neighbours.get(0); // This is the minimum weight edge in the list. } ArrayList<Vertex> visited and ArrayList<Edges> graph get constructed when creating a new graph. Visited() -method is simply a boolean check to see if ArrayList visited contains the Vertex we're thinking about moving to. I tested the findClosestNeighbour() independantly and it seemed to be working but if someone finds something wrong with it then that feedback is welcome also. Mainly though as I mentioned my problem is with actually building the main loop in the Prim() -method, and if there's any additional info needed I'm happy to provide it. Thank you. Edit: To clarify what my train of thought with the Prim() method is. What I want to do is first randomize the starting point in the graph. After that, I will find the closest neighbor to that starting point. Then we'll add the edge connecting those two points to the MST, and also add the vertices to the visited list for checking later, so that we won't form any loops in the graph. 
Here's the error that gets thrown: Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.rangeCheck(Unknown Source) at java.util.ArrayList.get(Unknown Source) at Graph.findClosestNeighbour(graph.java:203) at Graph.Prim(graph.java:179) at MST.main(MST.java:49) Line 203: return neighbour.get(0); in findClosestNeighbour() Line 179: Edge e = findClosestNeighbour(startingVertex); in Prim()
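
    Two things stand out in the posted code: findClosestNeighbour returns neighbours.get(0) even when the list is empty (which is exactly the IndexOutOfBoundsException in the trace), and the main loop only ever looks for neighbours of the starting vertex, whereas Prim's must pick the cheapest edge leaving the whole visited set on every step. The usual fix is a frontier kept in a priority queue. A compact sketch of that formulation (Python rather than Java, just to show the shape of the loop; names are mine):

        import heapq

        def prim_mst(vertex_count, edges):
            """edges: (weight, u, v) for an undirected graph on vertices 0..vertex_count-1."""
            adj = [[] for _ in range(vertex_count)]
            for w, u, v in edges:
                adj[u].append((w, u, v))
                adj[v].append((w, v, u))
            visited = [False] * vertex_count
            mst, heap = [], []
            def add_vertex(u):
                visited[u] = True
                for w, a, b in adj[u]:
                    if not visited[b]:
                        heapq.heappush(heap, (w, a, b))
            add_vertex(0)                              # any starting vertex works
            while heap and len(mst) < vertex_count - 1:
                w, u, v = heapq.heappop(heap)
                if visited[v]:
                    continue                           # stale edge: both ends already in the tree
                mst.append((u, v, w))
                add_vertex(v)
            return mst

        edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 1, 3), (3, 2, 3)]
        print(prim_mst(4, edges))                      # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]

    In the Java version the equivalent change is to push every edge out of a newly visited vertex into a PriorityQueue, pop the smallest, skip it if both endpoints are already visited, and stop after vertexCount - 1 edges; that removes both the empty-list crash and the bias toward the starting vertex.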

    Read the article

  • Gradient algorithm produces little white dots

    - by user146780
    I'm working on an algorithm to generate point to point linear gradients. I have a rough, proof of concept implementation done: GLuint OGLENGINEFUNCTIONS::CreateGradient( std::vector<ARGBCOLORF> &input,POINTFLOAT start, POINTFLOAT end, int width, int height,bool radial ) { std::vector<POINT> pol; std::vector<GLubyte> pdata(width * height * 4); std::vector<POINTFLOAT> linearpts; std::vector<float> lookup; float distance = GetDistance(start,end); RoundNumber(distance); POINTFLOAT temp; float incr = 1 / (distance + 1); for(int l = 0; l < 100; l ++) { POINTFLOAT outA; POINTFLOAT OutB; float dirlen; float perplen; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = static_cast<float>(dir.x * 1.0 / dirlen); ndir.y = static_cast<float>(dir.y * 1.0 / dirlen); perp.x = dir.y; perp.y = -dir.x; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = static_cast<float>(perp.x * 1.0 / perplen); nperp.y = static_cast<float>(perp.y * 1.0 / perplen); perpoffset.x = static_cast<float>(nperp.x * l * 0.5); perpoffset.y = static_cast<float>(nperp.y * l * 0.5); diroffset.x = static_cast<float>(ndir.x * 0 * 0.5); diroffset.y = static_cast<float>(ndir.y * 0 * 0.5); outA.x = end.x + perpoffset.x + diroffset.x; outA.y = end.y + perpoffset.y + diroffset.y; OutB.x = start.x + perpoffset.x - diroffset.x; OutB.y = start.y + perpoffset.y - diroffset.y; for (float i = 0; i < 1; i += incr) { temp = GetLinearBezier(i,outA,OutB); RoundNumber(temp.x); RoundNumber(temp.y); linearpts.push_back(temp); lookup.push_back(i); } for (unsigned int j = 0; j < linearpts.size(); j++) { if(linearpts[j].x < width && linearpts[j].x >= 0 && linearpts[j].y < height && linearpts[j].y >=0) { pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 0] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 1] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 2] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 3] = (GLubyte) 255; } } lookup.clear(); linearpts.clear(); } return CreateTexture(pdata,width,height); } It works as I would expect most of the time, but at certain angles it produces little white dots. I can't figure out what does this. This is what it looks like at most angles (good) http://img9.imageshack.us/img9/5922/goodgradient.png But once in a while it looks like this (bad): http://img155.imageshack.us/img155/760/badgradient.png What could be causing the white dots? Is there maybe also a better way to generate my gradients if no solution is possible for this? Thanks
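
    The dots are most likely gaps: the gradient is rasterised by stepping along lines and rounding each point to a pixel, and at certain angles two consecutive steps land on non-adjacent pixels, leaving the pixel between them unwritten. The usual fix is to invert the loop - iterate over every destination pixel once and project it onto the start-to-end axis to get its position t in the gradient, which by construction cannot leave holes. A small sketch of the idea (Python pseudocode for the fill loop, not OpenGL; names are mine):

        def linear_gradient(width, height, start, end, color_at):
            """Fill a width x height RGBA buffer; color_at(t) maps t in [0,1] to (r,g,b,a)."""
            sx, sy = start
            dx, dy = end[0] - sx, end[1] - sy
            length_sq = dx * dx + dy * dy or 1
            pixels = bytearray(width * height * 4)
            for y in range(height):
                for x in range(width):
                    t = ((x - sx) * dx + (y - sy) * dy) / length_sq   # projection onto the axis
                    t = min(1.0, max(0.0, t))                         # clamp beyond the endpoints
                    r, g, b, a = color_at(t)
                    offset = (y * width + x) * 4
                    pixels[offset:offset + 4] = bytes((r, g, b, a))
            return pixels

        ramp = lambda t: (int(255 * t),) * 3 + (255,)                 # black-to-white example
        buf = linear_gradient(64, 64, (0.0, 0.0), (63.0, 63.0), ramp)

    The same per-pixel projection translates directly into the C++ loop (or into a small fragment shader, which is how this is normally done on the GPU), and it removes the need for the lookup/linearpts bookkeeping entirely.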

    Read the article

  • Optimizing apache server load

    - by Jevgeni Smirnov
    We have an issue with a dedicated server load. We have 16 processors with 4 core @ 2.40GHz, if I understood correctly cat /proc/cpuinfo output. Unfortunately, I don't have access to free -m or vmstat. But from top I got that we have 24 GB. And snapshot from top about processes: As far as I see, memory is not used at all. But the cpu is used heavily. Apache consumes most of CPU. Another useful piece of information: Every 1.0s: ps u -C httpd,mysqld,php Tue Mar 27 10:48:19 2012 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 7476 0.0 0.1 446808 37880 ? SNs Mar06 0:43 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf mysql 36061 41.6 2.1 1113672 529876 ? SNl Feb20 21503:48 /opt/zone/sbin/mysqld --basedir=/opt/zone --datadir=/srvdata/mysql --user=mysql --log-error=/srvdata/mysql/dn79.err --pid-file=/srvdata/mysql/mysqld.pid --socket=/tmp/mysql.sock --port=3306 root 37257 0.0 0.0 424056 16840 ? SNs Mar22 1:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 52743 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52744 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52745 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52746 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52747 0.0 0.1 446956 30324 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52980 69.1 1.8 852468 458088 ? RN 10:41 5:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 53483 47.0 0.8 615088 221040 ? RN 10:43 2:05 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 53641 1.8 0.2 446580 54632 ? SN 10:45 0:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54384 81.2 0.9 625828 229972 ? RN 10:45 2:14 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54411 47.7 0.5 535992 142416 ? RN 10:45 1:09 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54470 41.7 0.4 512528 120012 ? RN 10:46 0:54 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54475 0.1 0.1 437016 41528 ? SN 10:46 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54486 1.5 0.2 445636 53916 ? SN 10:46 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54531 2.5 0.2 445424 53012 ? SN 10:46 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54549 0.0 0.0 424188 9188 ? SN 10:46 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54642 0.0 0.0 424188 9200 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54651 0.0 0.0 424188 9188 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54661 0.0 0.0 424188 9208 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54663 6.9 0.2 449936 58560 ? SN 10:47 0:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54666 6.0 0.2 453356 61124 ? SN 10:47 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54667 2.8 0.1 437608 42088 ? 
SN 10:47 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54670 1.5 0.1 437540 42172 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54672 2.1 0.1 439076 43648 ? SN 10:47 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54709 0.0 0.0 424188 9192 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54711 1.0 0.1 437284 41780 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54712 11.8 0.2 448172 54700 ? SN 10:47 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54720 0.0 0.0 424188 9192 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54721 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54747 9.1 0.2 443568 51848 ? SN 10:48 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54782 1.8 0.1 438708 37896 ? RN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54784 0.0 0.0 424188 9180 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54785 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54789 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54790 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54791 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54792 0.0 0.0 424056 8352 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 Webalizer shows following: What can be done in the following situation? The application is Magento.

    Read the article

  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using postgreSQL for a scientific application (unsupervised clustering). The python program is multi-threaded so that each thread manages its own postmaster process (one per core). Hence, their is a lot of concurrency. Each thread-process loop infinitely though two SQL queries. The first is for reading, the second is for writing. The read operation considers 500 time the amount of rows the write operation considers. Here is the output of dstat: ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total- usr sys idl wai hiq siq| used buff cach free| read writ| in out | read writ 4 0 32 64 0 0|3599M 63M 57G 1893M|1524k 16M| 0 0 | 98 2046 1 0 35 64 0 0|3599M 63M 57G 1892M|1204k 17M| 0 0 | 68 2062 2 0 32 66 0 0|3599M 63M 57G 1890M|1132k 17M| 0 0 | 62 2033 2 1 32 65 0 0|3599M 63M 57G 1904M|1236k 18M| 0 0 | 80 1994 2 0 31 67 0 0|3599M 63M 57G 1903M|1312k 16M| 0 0 | 70 1900 2 0 37 60 0 0|3599M 63M 57G 1899M|1116k 15M| 0 0 | 71 1594 2 1 37 60 0 0|3599M 63M 57G 1898M| 448k 17M| 0 0 | 39 2001 2 0 25 72 0 0|3599M 63M 57G 1896M|1192k 17M| 0 0 | 78 1946 1 0 40 58 0 0|3599M 63M 57G 1895M| 432k 15M| 0 0 | 38 1937 I am pretty sure I could write more often than that for I have seen it write up to 110-140M on dstat. How can I optimize this process?

    Read the article

  • XDEBUG/PHP doesn't dump profile even when set up properly?

    - by John D.
    I installed xdebug from source, but also tried my package manager (separately) and they both are loaded correctly (verified by restarting Apache and seeing the xdebug copyright info in phpinfo()) but they do not dump profiling information. Out of the 40 different attempts of configuration it logged once or twice but I lost what I did, I tried with first only loading the module in php.ini with no settings, but it didn't log to /tmp/. I tried many different settings but my current is now: xdebug.profiler_enable = Off xdebug.profiler_enable_trigger = 1 xdebug.profiler_output_dir = "/tmp/" xdebug.profiler_output_name = "profiler.%t" Of course I call my script through 127.0.0.1/test.php?XDEBUG_PROFILE, which is for enable_trigger. Do you know why it would not dump profiler information? nobody (Arch Linux) can write to /tmp/ as it has before, so I'm sure it is not a permissions error. Apache's error_log does not tell me anything about xdebug either, as it has loaded correctly. It just does not "work"! EDIT: I made a subfolder "xdebug_profiles" in /tmp/ and chown'ed it to nobody, and now it works flawlessly. I'm not sure why it couldn't write before, I guess it's just a caveat with nobody on Arch. I answered my own question , not enough points to answer it or comment, so consider this answered.

    Read the article

  • CPU's on Hyper-V host system is just idling, even though VM's are at full throttle

    - by Bjørn
    Hello, I have a server that is running Windows 2008 64 bit Hyper-V, with 8 gigs of RAM and Intel Xeon X3440 @ 2.53 Ghz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three Virtual Machines, all running Windows 2008 32 bit. Build server, running Team City Staging server SQL Server, running SQL Server 2005 These three machines are running very sluggishly, they are at 100% cpu even though the host system is barely using any cpu at all, typically below 10% total. Could anyone please give some tips as to the best setup for CPU allocating? Should I have set each server to have two cores, or should I increase this number above the total number of cores on the host? What is a good number to set on the Virtual Machine Reserve and Virtual Machine Limit? Is 8 gigs of physical RAM insufficient for 3 VM's? Thanks for reading. :)

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, partly to obtain better SEO results. We have already applied some minify measures and combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted a light + standard HTTP server configuration, so that one server can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and faster, keep more MinSpareServers running, etc. So, since browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, so the browser can open more connections. The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main.com's Apache from redirecting main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all the images on a page be penalized by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths? EDIT: Just to clarify, our concern is not so much with server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).

    Read the article

  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6 core processors. The person that configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware these processors have 3 memory channels and to optimize memory performance one should choose memory in multiples of three. Based on this information, I have a question: By entering the BIOS, can I disable the bank on each processor that has the single memory module? If so, will this have any adverse effects or behave differently than physically removing the modules? I ask due to the fact that I prefer to store the extra memory in the system if it truly behaves as if the memory is not even there. Also, I see this as an opportunity to test 12GB vs. 16GB to see if there is a noticeable difference. Note: According to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 and in addition, the memory bandwidth goes down by about 23%.

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks

    Read the article

  • Monitoring outgoing bandwidth of application

    - by jnolte
    I currently have a VPS that is consuming a ton of outgoing bandwidth and I am trying to drill down to where this may be coming from. Does anyone know of a logical way to go about finding out which pages on the site are consuming the most outgoing data? We have done a ton of front-end optimizations to the site and our Google Page Speed rankings are 85%, so I feel we have done a pretty great job at optimizing the site for speed. Can someone lend some insight on how they have made similar optimizations? Application / server stack: LEMP running Varnish Cache / PHP5-FPM, WordPress running W3 Total Cache, Ubuntu 12.04 LTS.

    Read the article

  • Looking for a short term solution to improve website performance with additional server

    - by Tanim Mirza
    I am working with a small team to run an internal website running with PHP 5.3.9, MySQL 5.0.77. All the files and database are hosted on a dedicated Linux machine with the following configuration: Intel Xeon E5450 8 CPU cores @3.00GHz, 2992.498 MHz, Cache 6148 KB, Cent OS – Red Hat Enterprise Linux Server release 5.4 We started small and then the database got bigger and now the website performance degraded significantly. We often get server space overrun, mysql overloaded with too many calls, etc. We don't have much experience dealing with these issues. We recently got another server that we were thinking to use to improve performance. Since it has better configuration, some of us wanted to completely move everything to the new machine. But I am trying to find out how we can utilize both machine for optimized performance. I found options such as MySQL clustering, Load balancer, etc. I was wondering if I could get any suggestion for this situation "How to utilize two machines in short term for best performance", that would be great. By short term we are looking for something that we can deploy in a month or so. Thanks in advance for your time.

    Read the article

  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like to get your ideas on how I'll be able to do what I am aiming. My goal is to create a database with information of persons who work at a place depending on their profession and skills,and keep control of salary and projects (how much would cost summing all the hours of work) I have 3 categories which can have subcategories: Outsourcing Technician welder turner assistant Administrative supervisor manager So each person has its information and the projects they are working on, also one person may do several jobs... I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION) but I guess there is a better way of doing this. create table Employee ( PRIMARY KEY [Person_ID] int(10), [Name] varchar(30), [sex] varchar(10), [address] varchar(10), [profession] varchar(10), [Skills_ID] int(10), [Proyect_ID] int(10), [Salary_ID] int(10), [Salary] float ) create table Skills ( PRIMARY KEY [Skills_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID), [Skills_pay] float(10), [Comments] varchar(50) ) create table Proyects ( PRIMARY KEY [Proyect_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) create table Salary ( PRIMARY KEY [Salary_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) So to get the total amount of the cost of a project I would just sum the working hours of each employee envolved and sum some extra costs in an aggregate query. Is there a way to do this in a more efficient way? What to add or delete of this small model? I guess I am missing something in the salary - maybe I need another table for that?

    Read the article

  • Optimize windows 2008 performance

    - by Giorgi
    Hello, I have Windows Server 2008 SP2 installed as a virtual machine on my personal laptop. I use it only for source control (VisualSVN) and continuous integration (TeamCity). As the virtual machine's resources are limited, I'd like to optimize its performance by disabling services and features that are not necessary for my purposes. Can anyone recommend where to start, or provide tips for getting better performance? Thanks.

    Read the article

  • Open Source Scheduling Software?

    - by Kaiser Advisor
    Hi Everyone, I'm looking for scheduling software to schedule 25 people over 8 work sites. Most are FT and can work up to 40 hours a week, but some are part-time and can only work certain days of the week and up to a certain number of hours a week. There are 3 classes of employees: Managers, Supervisors, and Workers. They should be shuffled so that they spend approximately equal time at each of the 8 work sites and with all classes of employees; i.e., Joe the worker should spend about 1 out of 8 days on each work site, and work with managers, supervisors, and other workers equally. I tried to do this in excel with the solver, but the shuffling requirement makes it way too complicated, so I'm stuck trying to do big parts of this manually with the solver helping out with just the hour provisioning piece. Is there any open source software that could help me? Much appreciated! KA

    Read the article
