Search Results

Search found 892 results on 36 pages for 'rising star'.


  • Low Voting {API(Five Star)} Feedback

    - by D.J.
    Is there any module in Drupal which provides low-rating feedback? E.g., if someone wants to rate a piece of content as <= 2 (out of 5), then before he does so a pop-up window displays the text "Are you sure you want to rate so low?", etc. If there is no such module, is there any easy way of doing it?

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will let you treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes.

    EDIT: Let me be clear, I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly. I don't want a tool that targets business people. I want a tool that understands what a star and snowflake schema is. I want to be able to tell it what the fact tables and the dimension tables are, and have it create a UI on top of them. This is an easier problem for the tool vendor to solve because I am spoon-feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

  • Why is StarCraft II lagging with these specs? [closed]

    - by Kev
    17.3" FULL HD (1920X1080) LED LCD w/nVIDIA GeForce® GT 425M w/1GB GDDR3 + Intel GMA HD4500 i also got the 640m core i7, i have 8 gb ram but for some damn reason, when there is a batle in progress it appears as if my graphics card or something is not powerful enough, shouldn't these specs be more than enough to handle star craft II? what could i do to improve my laptop? i also got the 64 bit os

    Read the article

  • What is the convention for the star location in reference variables?

    - by Brett Ryan
    I have been learning Objective-C, and different books and examples use differing conventions for the location of the star (*) when naming reference variables:

        MyType* x;
        MyType *y;
        MyType*z; // this also works

    Personally I prefer the first option, as it illustrates that x is a "pointer type of MyType". I see the first two used interchangeably, and sometimes I've seen differing uses of both in the same code. I want to know what the most common convention is. It's been a very long time since I've programmed in C (15 years), so I can't remember whether all variants are legal in C as well or whether this is Objective-C specific. I'd prefer answers which state why one is better than the other, along the lines of how I explained my reading of it above.

    Read the article

  • The Zeus/SpyEye Trojan grows stronger still with P2P and a data-sharing module; it should be the "star" of 2012's malware

    The Zeus/SpyEye Trojan grows stronger. With P2P and a data-sharing module, it should be the "star" of 2012's malware. SpyEye is making headlines again. This "banking" Trojan spies on online account logins and steals personal information (login, password, bank card numbers, ...). It can inject HTML code into machines, allowing whoever controls it to access all of the data remotely. But SpyEye also has the particularity of hiding fraudulent money transfers by displaying an incorrect account balance to the customer. The malware keeps acting even after a person has ...

    Read the article

  • method for specialized pathfinding?

    - by rlbond
    I am working on a roguelike in my (very little) free time. Each level will basically be a few rectangular rooms connected together by paths. I want the paths between rooms to be natural-looking and windy, however. For example, I would not consider the following natural-looking:

        B X X X XX XX XX AXX

    I really want something more like this:

        B X XXXX X X X X AXXXXXXXX

    These paths must satisfy a few properties:

    - I must be able to specify an area inside which they are bounded,
    - I must be able to parameterize how windy and lengthy they are,
    - the lines should not look like they started at one endpoint and ended at the other. For example, the first example above looks as if it started at A and ended at B, because it basically changed directions repeatedly until it lined up with B and then just went straight there.

    I was hoping to use A*, but honestly I have no idea what my heuristic would be. I have also considered using a genetic algorithm, but I don't know how practical that method might end up being. My question is: what is a good way to get the results I want? Please do not just name a method like "A*" or "Dijkstra's algorithm", because I also need help with a good heuristic.
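
    One possible way to get winding corridors (not something the question itself settles) is to keep A* but randomize the terrain costs inside the bounding area, so the cheapest route wanders instead of hugging the straight line between the endpoints. The sketch below is a minimal Python illustration; the grid representation, the windiness parameter and the function names are assumptions made for the example, not part of the original question.

        import heapq, random

        def windy_path(free, start, goal, windiness=4.0, seed=None):
            """A* over the cells in `free`; random per-cell costs make the path wander.

            free: set of (x, y) cells the path may use (the bounding area).
            windiness: 0 gives near-straight paths, larger values give windier ones.
            """
            rng = random.Random(seed)
            cost = {c: 1.0 + rng.random() * windiness for c in free}   # random "terrain"
            h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])    # Manhattan heuristic
            frontier = [(h(start), 0.0, start)]
            came_from, g = {start: None}, {start: 0.0}
            while frontier:
                _, gc, cur = heapq.heappop(frontier)
                if cur == goal:
                    break
                x, y = cur
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nxt not in free:
                        continue
                    ng = gc + cost[nxt]
                    if ng < g.get(nxt, float("inf")):
                        g[nxt] = ng
                        came_from[nxt] = cur
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
            if goal not in came_from:
                return []                                              # goal unreachable
            path, node = [], goal
            while node is not None:                                    # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]

    Because every cell cost is at least 1, the Manhattan heuristic stays admissible, and raising the windiness makes detours cheap relative to the direct route, which is what produces the wandering look.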

    Read the article

  • Finding minimum cut-sets between bounded subgraphs

    - by Tore
    If a game map is partitioned into subgraphs, how do I minimize the number of edges between subgraphs?

    I have a problem: I'm trying to make A* searches through a grid-based game like Pacman or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and a minimum number of vertices for each subgraph that act as soft constraints. Alternatively you could say I am looking to find bridges between subgraphs, but it's generally the same problem. Given a game map, what I want to do is find enclosures so that I can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what I want is to find these colored regions on any given map.

    My motivation: the reason for bothering to do this, and not just staying content with the performance of a simple Manhattan distance heuristic, is that an enclosure heuristic can give more optimal results, and I would not have to actually run the A* to get proper distance calculations; it would also allow later adding competitive blocking of opponents within these enclosures when playing Sokoban-type games. The enclosure heuristic can also be used for a minimax approach to finding goal vertices more properly.

    A possible solution to the problem is the Kernighan-Lin algorithm:

        function Kernighan-Lin(G(V,E)):
            determine a balanced initial partition of the nodes into sets A and B
            do
                A1 := A; B1 := B
                compute D values for all a in A1 and b in B1
                for (i := 1 to |V|/2)
                    find a[i] from A1 and b[i] from B1, such that g[i] = D[a[i]] + D[b[i]] - 2*c[a][b] is maximal
                    move a[i] to B1 and b[i] to A1
                    remove a[i] and b[i] from further consideration in this pass
                    update D values for the elements of A1 = A1 / a[i] and B1 = B1 / b[i]
                end for
                find k which maximizes g_max, the sum of g[1],...,g[k]
                if (g_max > 0) then
                    Exchange a[1],a[2],...,a[k] with b[1],b[2],...,b[k]
            until (g_max <= 0)
            return G(V,E)

    My problem with this algorithm is its O(n^2 * lg(n)) runtime; I am thinking of limiting the nodes in A1 and B1 to the border of each subgraph to reduce the amount of work done. I also don't understand the c[a][b] cost in the algorithm: if a and b do not have an edge between them, is the cost assumed to be 0 or infinity, or should I create an edge based on some heuristic? Do you know what c[a][b] is supposed to be when there is no edge between a and b? Do you think my problem is suitable for a multi-level approach? Why or why not? Do you have a good idea for how to reduce the work done with the Kernighan-Lin algorithm for my problem?
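
    On the c[a][b] question: in most textbook statements of Kernighan-Lin, c(a, b) is simply the weight of the edge between a and b, taken as 0 when the two nodes are not adjacent (not infinity, and no artificial edge is created). A minimal Python sketch of the D-value and swap-gain computation under that assumption; the adjacency-dict representation and names are illustrative only:

        def edge_cost(adj, a, b):
            # c(a, b): weight of the edge a-b, or 0 when the nodes are not adjacent.
            return adj.get(a, {}).get(b, 0)

        def d_value(adj, node, own_side, other_side):
            # D(v) = external cost minus internal cost of node v.
            external = sum(edge_cost(adj, node, u) for u in other_side)
            internal = sum(edge_cost(adj, node, u) for u in own_side if u != node)
            return external - internal

        def swap_gain(adj, a, b, side_a, side_b):
            # g = D(a) + D(b) - 2*c(a, b): reduction in cut size if a and b swap sides.
            return (d_value(adj, a, side_a, side_b)
                    + d_value(adj, b, side_b, side_a)
                    - 2 * edge_cost(adj, a, b))

        # Tiny example: unweighted 4-cycle 0-1-2-3-0, partitioned as {0, 1} vs {2, 3}.
        adj = {0: {1: 1, 3: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 0: 1}}
        print(swap_gain(adj, 1, 2, {0, 1}, {2, 3}))   # -2: this swap would worsen the cut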

    Read the article

  • Removing the obstacle that yields the best path from a map after A* traversal

    - by David Titarenco
    I traverse a 16x16 maze using my own A* implementation. This is exactly what my program does: http://www.screenjelly.com/watch/fDQh98zMP0c?showTab=share All is well. However, after the traversal, I would like to find out which wall would give me the best alternative path if removed. Apart from removing every block and re-running A* on the maze, what's a clever solution? I was thinking of giving every wall node (ignored by A*) a tentative F-value, and changing the node structure to also have an n-sized list of node *tentative_parent, where n is the number of walls in the maze. Could this be viable?
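
    One alternative worth considering (an assumption of this sketch, not something from the question): run a single search over an extended state (cell, wall already broken) that is allowed to pass through at most one wall; the first time the goal is reached, the wall that was crossed is the single best wall to remove. Since the maze is uniform-cost, plain BFS is enough. The grid layout and names below are made up for the illustration.

        from collections import deque

        def best_wall_to_remove(grid, start, goal):
            """Shortest path allowed to cross at most one wall; returns (length, wall_used).
            grid[y][x] is True for walls; wall_used is None if no wall was needed."""
            height, width = len(grid), len(grid[0])
            start_state = (start, None)                 # (cell, wall broken so far or None)
            seen = {start_state}
            queue = deque([(start_state, 0)])
            while queue:
                ((x, y), wall), dist = queue.popleft()
                if (x, y) == goal:
                    return dist, wall
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if not (0 <= nx < width and 0 <= ny < height):
                        continue
                    if grid[ny][nx]:                    # a wall: usable only if none broken yet
                        nxt = ((nx, ny), (nx, ny)) if wall is None else None
                    else:
                        nxt = ((nx, ny), wall)
                    if nxt is not None and nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, dist + 1))
            return None

    For a 16x16 maze the extended state space stays small, and the search stops as soon as the best single-wall path is found, instead of comparing one full re-run per wall.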

    Read the article

  • Has anyone implemented the SMA* search algorithm?

    - by Endy
    I find that the algorithm description in AIMA (Artificial Intelligence: A Modern Approach) is not correct at all. What does 'necessary' mean? What is the memory limit - the queue size or the number of processed nodes? What if the current node has no children at all? I am wondering whether this algorithm itself is correct or not, because I searched the Internet and nobody has implemented it yet. Thanks.

    Read the article

  • Travelling Salesman Problem

    - by Arjun Vasudevan
    I'm trying to solve the travelling salesman problem using the following algorithms: DFS, hill climbing and A*. I could write code to solve it using DFS. Can I have some help with solving it using the other two algorithms? I have searched for this a lot on the web.
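
    For the hill-climbing part, one standard formulation (a generic sketch, not tied to the asker's DFS code) is local search over complete tours with 2-opt moves: keep reversing a segment while doing so shortens the tour, and restart a few times to escape some local optima. The coordinate format and names below are assumptions for the example.

        import math, random

        def tour_length(cities, tour):
            return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def hill_climb_tsp(cities, restarts=20, seed=0):
            """2-opt hill climbing with random restarts."""
            rng = random.Random(seed)
            n = len(cities)
            best_tour, best_len = None, float("inf")
            for _ in range(restarts):
                tour = list(range(n))
                rng.shuffle(tour)                          # random starting tour
                improved = True
                while improved:                            # climb until no 2-opt move helps
                    improved = False
                    for i in range(n - 1):
                        for j in range(i + 2, n):
                            candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                            if tour_length(cities, candidate) < tour_length(cities, tour):
                                tour, improved = candidate, True
                length = tour_length(cities, tour)
                if length < best_len:
                    best_tour, best_len = tour, length
            return best_tour, best_len

        cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
        print(hill_climb_tsp(cities))

    A* needs a different framing: a state is the partial tour, and the heuristic must underestimate the cost of visiting the remaining cities (a minimum spanning tree over the unvisited cities is a common admissible choice).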

    Read the article

  • AStar in a specific case in C#

    - by KiTe
    Hello. For an internship, I have to use the A* algorithm in the following case: the unit shape is a square of height and width 1; we can travel from one zone, represented by a rectangle, to another, but we can't travel outside these predefined areas; we can go from one rectangle to another through a door, represented by a segment on the corresponding square edge.

    Here are the two things I already did, but which didn't satisfy my boss:

    1. I created the following classes:
       - a Door class, which contains the locations of the two separated squares and the door's orientation (top, left, bottom, right),
       - a Map class, which contains a door list, a rectangle list representing the walkable areas, and a 2D array representing the ground's squares (for additional information, through an enumeration),
       - classes for the A* algorithm (Node, AStar).

    2. I then switched to:
       - a MapCase class, which contains information about the case effect and doors through an enumeration (with the [Flags] attribute set, to be able to accumulate several pieces of information on each case),
       - a Map class which only contains a 2D array of MapCase objects,
       - the classes for the A* algorithm (still Node and AStar).

    Although the second version is better than the first (less useless calculation, better map class architecture), my boss is still not satisfied with my mapping class architecture. The A* and Node classes are good and easily maintainable, so I don't think I have to explain them in more depth for now.

    So here is my question: does anybody have a good idea for implementing A* with this problem's specifics (walkable rectangles, but a square unit area, travelling through doors)? He said that a grid view of the problem (so a 2D array) shouldn't be the correct way to solve it. I hope I've been clear while exposing my problem. Thanks, KiTe
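
    One reading of the boss's hint (an interpretation, not something stated in the question) is to drop the grid entirely and search a graph whose nodes are the doors themselves plus the start and goal points, connecting two nodes whenever they lie on a common rectangle; since rectangles are convex, the straight-line distance inside the shared rectangle is a usable edge cost. A rough Python sketch with invented data shapes and names, just to show the structure:

        import itertools, math

        def build_door_graph(doors, start, goal):
            """doors: list of ((x, y), rect_a, rect_b) giving each door's midpoint and the
            indices of the two rectangles it joins; start/goal: ((x, y), rect_index).
            Returns an adjacency dict {node: {neighbour: cost}}."""
            nodes = {}                                     # node id -> (point, set of rectangle indices)
            for i, (p, ra, rb) in enumerate(doors):
                nodes[("door", i)] = (p, {ra, rb})
            nodes["start"] = (start[0], {start[1]})
            nodes["goal"] = (goal[0], {goal[1]})

            graph = {n: {} for n in nodes}
            for (na, (pa, rects_a)), (nb, (pb, rects_b)) in itertools.combinations(nodes.items(), 2):
                if rects_a & rects_b:                      # share a rectangle: straight segment is walkable
                    cost = math.dist(pa, pb)
                    graph[na][nb] = cost
                    graph[nb][na] = cost
            return graph

    A* (or plain Dijkstra) then runs over a handful of door nodes instead of over every unit square, which is one way to avoid a grid view of the problem.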

    Read the article

  • How to find minimum cut-sets for several subgraphs of a graph of degrees 2 to 4

    - by Tore
    I have a problem: I'm trying to make A* searches through a grid-based game like Pacman or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and a minimum number of vertices per subgraph that act as soft constraints. Alternatively you could say I am looking to find bridges between subgraphs, but it's generally the same problem. Given a game map, what I want to do is find enclosures so that I can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what I want is to find these colored regions on any given map. The reason for bothering to do this, and not just staying content with the performance of a simple Manhattan distance heuristic, is that an enclosure heuristic can give more optimal results, and I would not have to actually run the A* to get proper distance calculations; it would also allow later adding competitive blocking of opponents within these enclosures when playing Sokoban-type games. The enclosure heuristic can also be used for a minimax approach to finding goal vertices more properly. Do you know of a good algorithm for solving this problem, or do you have any suggestions about things I should explore?

    Read the article

  • What does the * symbol do near a function argument, and how is it used in other scenarios?

    - by user502052
    I am using Ruby on Rails 3, and I would like to know what the presence of a * symbol near a function argument means, and to understand its usage in other scenarios. Example scenario (this method is from the Ruby on Rails 3 framework):

        def find(*args)
          return to_a.find { |*block_args| yield(*block_args) } if block_given?

          options = args.extract_options!

          if options.present?
            apply_finder_options(options).find(*args)
          else
            case args.first
            when :first, :last, :all
              send(args.first)
            else
              find_with_ids(*args)
            end
          end
        end

    Read the article

  • Why does A* path finding sometimes go in straight lines and sometimes diagonals? (Java)

    - by Relequestual
    I'm in the process of developing a simple 2D grid-based sim game, and have fully functional pathfinding. I used the answer from my previous question as the basis for implementing A* pathfinding (http://stackoverflow.com/questions/735523/pathfinding-2d-java-game). To show what I'm really asking, I need to show you this video screen capture that I made: http://www.screenjelly.com/watch/Bd7d7pObyFo I was just testing to see how the person would move to a location and back again, and this was the result: a different choice of path depending on the direction, an unexpected result. Any ideas?
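
    A common cause of this kind of symptom (though whether it applies here is only a guess) is tie-breaking: when several grid paths have exactly the same cost, A* may return any of them, and the order in which equal-cost nodes leave the open list changes when the search runs in the opposite direction. One usual remedy, sketched below with assumed names, is a tiny deterministic tie-breaking term added to the heuristic, for example preferring nodes near the straight start-goal line:

        def heuristic(node, start, goal, eps=1e-3):
            """Manhattan distance plus a small cross-product tie-breaker."""
            dx, dy = abs(node[0] - goal[0]), abs(node[1] - goal[1])
            cross = abs((node[0] - goal[0]) * (start[1] - goal[1])
                        - (start[0] - goal[0]) * (node[1] - goal[1]))
            return (dx + dy) + eps * cross

    Keeping eps very small keeps the effect on path cost negligible while making the choice among equally short paths consistent in both directions.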

    Read the article

  • A* pathfinder obstacle collision problem

    - by Cheesebaron
    I am working on a project with a robot that has to find its way to an object, avoiding some obstacles on the way to the object it has to pick up. The problem is that the robot and the object the robot needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. Often the A* pathfinder chooses to place the route along the edges of the obstacles, sometimes making the robot collide with them, which we do not want. I have tried to add some more non-walkable fields to the obstacles, but it does not always work out very well: it still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on. Do you have any suggestions on what to do about this problem?
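
    A standard way to plan for a robot that is wider than one cell is to inflate every obstacle by the robot's radius before running A*, so the planner can keep treating the robot as a point while the map already accounts for its size. The sketch below is a rough Python illustration; the occupancy-grid format and the cell-based radius are assumptions for the example.

        def inflate_obstacles(grid, radius):
            """Return a copy of the occupancy grid where every cell within `radius`
            cells (Euclidean) of an obstacle is also marked non-walkable."""
            height, width = len(grid), len(grid[0])
            inflated = [row[:] for row in grid]
            for y in range(height):
                for x in range(width):
                    if not grid[y][x]:
                        continue                            # only grow around original obstacles
                    for dy in range(-radius, radius + 1):
                        for dx in range(-radius, radius + 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < height and 0 <= nx < width and dx * dx + dy * dy <= radius * radius:
                                inflated[ny][nx] = True
            return inflated

    The A* search itself stays unchanged; running it on the inflated grid keeps the route at least radius cells away from every real obstacle, which is the systematic version of "adding more non-walkable fields". If that makes a narrow passage disappear, the passage really is too narrow for the robot at that radius.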

    Read the article

  • How do I create a graph from this data structure?

    - by Shawn Mclean
    I took this data structure from this A* tutorial:

        public interface IHasNeighbours<N>
        {
            IEnumerable<N> Neighbours { get; }
        }

        public class Path<TNode> : IEnumerable<TNode>
        {
            public TNode LastStep { get; private set; }
            public Path<TNode> PreviousSteps { get; private set; }
            public double TotalCost { get; private set; }

            private Path(TNode lastStep, Path<TNode> previousSteps, double totalCost)
            {
                LastStep = lastStep;
                PreviousSteps = previousSteps;
                TotalCost = totalCost;
            }

            public Path(TNode start) : this(start, null, 0) { }

            public Path<TNode> AddStep(TNode step, double stepCost)
            {
                return new Path<TNode>(step, this, TotalCost + stepCost);
            }

            public IEnumerator<TNode> GetEnumerator()
            {
                for (Path<TNode> p = this; p != null; p = p.PreviousSteps)
                    yield return p.LastStep;
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return this.GetEnumerator();
            }
        }

    I have no idea how to create a simple graph with it. How do I add something like the following undirected graph using C#? Basically I'd like to know how to connect nodes. I have my own data structures from which I can already determine the neighbours and the distance; I'd now like to convert that into this posted data structure so I can run it through the AStar algorithm. I was seeking something more like:

        Path<EdgeNode> startGraphNode = new Path<EdgeNode>(tempStartNode);
        startGraphNode.AddNeighbor(someOtherNode, distance);

    Read the article

  • A* algorithm works OK, but not perfectly. What's wrong?

    - by Bart van Heukelom
    This is my grid of nodes: I'm moving an object around on it using the A* pathfinding algorithm. It generally works OK, but it sometimes acts wrongly:

    - When moving from 3 to 1, it correctly goes via 2. When going from 1 to 3 however, it goes via 4.
    - When moving between 3 and 5, it goes via 4 in either direction instead of the shorter way via 6.

    What can be wrong? Here's my code (AS3):

        public static function getPath(from:Point, to:Point, grid:NodeGrid):PointLine {
            // get target node
            var target:NodeGridNode = grid.getClosestNodeObj(to.x, to.y);

            var backtrace:Map = new Map();
            var openList:LinkedSet = new LinkedSet();
            var closedList:LinkedSet = new LinkedSet();

            // begin with first node
            openList.add(grid.getClosestNodeObj(from.x, from.y));

            // start A*
            var curNode:NodeGridNode;
            while (openList.size != 0) {
                // pick a new current node
                if (openList.size == 1) {
                    curNode = NodeGridNode(openList.first);
                } else {
                    // find cheapest node in open list
                    var minScore:Number = Number.MAX_VALUE;
                    var minNext:NodeGridNode;
                    openList.iterate(function(next:NodeGridNode, i:int):int {
                        var score:Number = curNode.distanceTo(next) + next.distanceTo(target);
                        if (score < minScore) {
                            minScore = score;
                            minNext = next;
                            return LinkedSet.BREAK;
                        }
                        return 0;
                    });
                    curNode = minNext;
                }

                // have not reached
                if (curNode == target) break;
                else {
                    // move to closed
                    openList.remove(curNode);
                    closedList.add(curNode);

                    // put connected nodes on open list
                    for each (var adjNode:NodeGridNode in curNode.connects) {
                        if (!openList.contains(adjNode) && !closedList.contains(adjNode)) {
                            openList.add(adjNode);
                            backtrace.put(adjNode, curNode);
                        }
                    }
                }
            }

            // make path
            var pathPoints:Vector.<Point> = new Vector.<Point>();
            pathPoints.push(to);
            while (curNode != null) {
                pathPoints.unshift(curNode.location);
                curNode = backtrace.read(curNode);
            }
            pathPoints.unshift(from);
            return new PointLine(pathPoints);
        }

    NodeGridNode::distanceTo()

        public function distanceTo(o:NodeGridNode):Number {
            var dx:Number = location.x - o.location.x;
            var dy:Number = location.y - o.location.y;
            return Math.sqrt(dx*dx + dy*dy);
        }

    Read the article

  • Searching in graphs/trees with depth-first/breadth-first/A* algorithms

    - by devoured elysium
    I have a couple of questions about searching in graphs/trees. Let's assume I have an empty chess board and I want to move a pawn around from point A to B.

    A. When using depth-first search or breadth-first search, must we use open and closed lists (a list with all the elements to check, and another with all the elements that were already checked)? Is it even possible to do it without those lists? What about A* - does it need them?

    B. When using lists, after having found a solution, how can you get the sequence of states from A to B? I assume that when you have items in the open and closed lists, instead of just having the (x, y) states, you have an "extended state" formed of (x, y, parent_of_this_node)?

    C. State A has 4 possible moves (right, left, up, down). If my first move is left, should I let the next state come back to the original state, i.e. do the "right" move? If not, must I traverse the search tree every time to check which states I've been to?

    D. When I see a state in the tree where I've already been, should I just ignore it, as I know it's a dead end? I guess to do this I'd have to always keep the list of visited states, right?

    E. Is there any difference between search trees and graphs? Are they just different ways to look at the same thing?
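
    On the bookkeeping in parts A and B above: one common pattern, sketched below with assumed names, is to keep a visited/closed set and a separate parent map instead of packing the parent into the state itself; the sequence of states from A to B is then recovered by walking the parent links backwards from the goal.

        from collections import deque

        def bfs_path(start, goal, neighbours):
            """neighbours(state) yields adjacent states; returns the list of states start..goal."""
            parent = {start: None}                 # doubles as the visited set
            queue = deque([start])
            while queue:
                state = queue.popleft()
                if state == goal:
                    path = []
                    while state is not None:       # walk parent links back to the start
                        path.append(state)
                        state = parent[state]
                    return path[::-1]
                for nxt in neighbours(state):
                    if nxt not in parent:          # skip states already seen (also relevant to C and D)
                        parent[nxt] = state
                        queue.append(nxt)
            return None

        # Pawn on an empty 8x8 board, four orthogonal moves.
        moves = lambda s: [(s[0] + dx, s[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= s[0] + dx < 8 and 0 <= s[1] + dy < 8]
        print(bfs_path((0, 0), (3, 2), moves))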

    Read the article

  • Correct formulation of the A* algorithm

    - by Eli Bendersky
    Hello, I'm looking at definitions of the A* pathfinding algorithm, and it seems to be defined somewhat differently in different places. The difference is in the action performed when, going through the successors of a node, a successor is found on the closed list.

    One approach (suggested by Wikipedia, and this article) says: if the successor is on the closed list, just ignore it.

    Another approach (suggested here and here, for example) says: if the successor is on the closed list, examine its cost. If it's higher than the currently computed score, remove the item from the closed list for future examination.

    I'm confused - which method is correct? Intuitively, the first makes more sense to me, but I wonder about the difference in definition. Is one of the definitions wrong, or are they somehow isomorphic?
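
    The two formulations coincide when the heuristic is consistent (monotone): a node then never enters the closed list with a suboptimal cost, so ignoring closed successors is safe. With a heuristic that is merely admissible, the re-opening variant is the one that still guarantees an optimal path. A small sketch of that successor-handling step, with invented variable names rather than any particular article's code:

        def relax_successor(current, nxt, step_cost, g, parent, open_set, closed):
            """Handle one successor during A* expansion, re-opening a closed node
            when a strictly cheaper route to it is found."""
            new_g = g[current] + step_cost
            if nxt in closed:
                if new_g >= g[nxt]:
                    return False                   # old route was at least as good: ignore
                closed.discard(nxt)                # cheaper route found: re-open the node
            if nxt not in open_set or new_g < g[nxt]:
                g[nxt] = new_g
                parent[nxt] = current
                open_set.add(nxt)                  # caller keys its priority queue on g + h
                return True
            return False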

    Read the article

  • Problems with the A* algorithm

    - by V_Programmer
    I'm trying to implement the A* algorithm in Java. I followed this tutorial, in particular this pseudocode: http://theory.stanford.edu/~amitp/GameProgramming/ImplementationNotes.html The problem is my code doesn't work: it goes into an infinite loop, and I really don't know why this happens... I suspect that the problem is in the F = G + H function implemented in the Graph constructors; I suspect I am not calculating the neighbour's F correctly. Here's my code:

        List<Graph> open;
        List<Graph> close;

        private void createRouteAStar(Unit u) {
            open = new ArrayList<Graph>();
            close = new ArrayList<Graph>();
            u.ai_route_endX = 11;
            u.ai_route_endY = 5;
            List<Graph> neigh;
            int index;
            int i;
            boolean finish = false;
            Graph current;
            int cost;
            Graph start = new Graph(u.xMap, u.yMap, 0,
                    ManhattanDistance(u.xMap, u.yMap, u.ai_route_endX, u.ai_route_endY));
            open.add(start);
            current = start;
            while (!finish) {
                index = findLowerF();
                current = new Graph(open, index);
                System.out.println(current.x);
                System.out.println(current.y);
                if (current.x == u.ai_route_endX && current.y == u.ai_route_endY) {
                    finish = true;
                } else {
                    close.add(current);
                    neigh = current.getNeighbors();
                    for (i = 0; i < neigh.size(); i++) {
                        cost = current.g + ManhattanDistance(current.x, current.y, neigh.get(i).x, neigh.get(i).y);
                        if (open.contains(neigh.get(i)) && cost < neigh.get(i).g) {
                            open.remove(open.indexOf(neigh));
                        } else if (close.contains(neigh.get(i)) && cost < neigh.get(i).g) {
                            close.remove(close.indexOf(neigh));
                        } else if (!open.contains(neigh.get(i)) && !close.contains(neigh.get(i))) {
                            neigh.get(i).g = cost;
                            neigh.get(i).f = cost + ManhattanDistance(neigh.get(i).x, neigh.get(i).y, u.ai_route_endX, u.ai_route_endY);
                            neigh.get(i).setParent(current);
                            open.add(neigh.get(i));
                        }
                    }
                }
            }
            System.out.println("step");
            for (i = 0; i < close.size(); i++) {
                if (close.get(i).parent != null) {
                    System.out.println(i);
                    System.out.println(close.get(i).parent.x);
                    System.out.println(close.get(i).parent.y);
                }
            }
        }

        private int findLowerF() {
            int i;
            int min = 10000;
            int minIndex = -1;
            for (i = 0; i < open.size(); i++) {
                if (open.get(i).f < min) {
                    min = open.get(i).f;
                    minIndex = i;
                    System.out.println("min");
                    System.out.println(min);
                }
            }
            return minIndex;
        }

        private int ManhattanDistance(int ax, int ay, int bx, int by) {
            return Math.abs(ax - bx) + Math.abs(ay - by);
        }

    And, as I've said, I suspect that the Graph class has the main problem. However, I've not been able to detect and fix it.

        public class Graph {
            int x, y;
            int f, g, h;
            Graph parent;

            public Graph(int x, int y, int g, int h) {
                this.x = x;
                this.y = y;
                this.g = g;
                this.h = h;
                this.f = g + h;
            }

            public Graph(List<Graph> list, int index) {
                this.x = list.get(index).x;
                this.y = list.get(index).y;
                this.g = list.get(index).g;
                this.h = list.get(index).h;
                this.f = list.get(index).f;
                this.parent = list.get(index).parent;
            }

            public Graph(Graph gp) {
                this.x = gp.x;
                this.y = gp.y;
                this.g = gp.g;
                this.h = gp.h;
                this.f = gp.f;
            }

            public Graph(Graph gp, Graph parent) {
                this.x = gp.x;
                this.y = gp.y;
                this.g = gp.g;
                this.h = gp.h;
                this.f = g + h;
                this.parent = parent;
            }

            public List<Graph> getNeighbors() {
                List<Graph> aux = new ArrayList<Graph>();
                aux.add(new Graph(x + 1, y, g, h));
                aux.add(new Graph(x - 1, y, g, h));
                aux.add(new Graph(x, y + 1, g, h));
                aux.add(new Graph(x, y - 1, g, h));
                return aux;
            }

            public void setParent(Graph g) {
                parent = g;
            }
        }

    Little edit: using System.out and the debugger, I discovered that the program ALWAYS checks the same "current" graph, (15, 8), which is the (u.xMap, u.yMap) position. It looks like it stays on the first step forever.

    Read the article

  • Finding good heuristic for A* search

    - by Martin
    I'm trying to find the optimal solution for a little puzzle game called Twiddle (an applet with the game can be found here). The game has a 3x3 matrix with the numbers from 1 to 9. The goal is to bring the numbers into the correct order using the minimum number of moves. In each move you can rotate a 2x2 square either clockwise or counterclockwise. I.e. if you have this state

        6 3 9
        8 7 5
        1 2 4

    and you rotate the upper-left 2x2 square clockwise you get

        8 6 9
        7 3 5
        1 2 4

    I'm using an A* search to find the optimal solution. My f() is simply the number of rotations needed. My heuristic function already leads to the optimal solution, but I don't think it's the best one you can find. My current heuristic takes each corner, looks at the number at that corner and calculates the Manhattan distance to the position this number will have in the solved state (which gives me the number of rotations needed to bring the number to that position), and sums all these values. I.e. you take the above example

        6 3 9
        8 7 5
        1 2 4

    and this end state

        1 2 3
        4 5 6
        7 8 9

    then the heuristic does the following:

        6 is currently at index 0 and should be at index 5: 3 rotations needed
        9 is currently at index 2 and should be at index 8: 2 rotations needed
        1 is currently at index 6 and should be at index 0: 2 rotations needed
        4 is currently at index 8 and should be at index 3: 3 rotations needed

        h = 3 + 2 + 2 + 3 = 10

    But there is the problem that you rotate 4 elements at once, so there are rare cases where you can do two (or more) of these estimated rotations in one move. This means the heuristic overestimates the distance to the solution. My current workaround is to simply exclude one of the corners from the calculation, which solves this problem at least for my test cases. I've done no research into whether this really solves the problem or whether the heuristic still overestimates in some edge cases.

    So my question is: what is the best heuristic you can come up with?

    (Disclaimer: This is for a university project, so this is a bit of homework. But I'm free to use any resource I can come up with, so it's okay to ask you guys. Also I will credit Stack Overflow for helping me ;) )
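
    For reference, the heuristic described above is compact to write down; the sketch below (which assumes a flat nine-element state representation, an assumption of the example) computes the summed corner Manhattan distances, and the corners argument makes the drop-one-corner workaround easy to express.

        def corner_manhattan(state, goal, corners=(0, 2, 6, 8)):
            """Sum of 3x3-board Manhattan distances between where each corner tile of
            `state` currently sits and where it sits in `goal`."""
            total = 0
            for idx in corners:
                tile = state[idx]
                gidx = goal.index(tile)
                total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
            return total

        start = (6, 3, 9, 8, 7, 5, 1, 2, 4)
        goal = (1, 2, 3, 4, 5, 6, 7, 8, 9)
        print(corner_manhattan(start, goal))              # 10, matching the worked example
        print(corner_manhattan(start, goal, (0, 2, 6)))   # the drop-one-corner workaround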

    Read the article

  • How to avoid that the robot gets trapped in local minimum?

    - by nesmoht
    Hi, I have spent some time occupying myself with motion planning for robots, and for a while I have wanted to explore the possibility of improving on what the "potential field" method offers. My challenge is to avoid the robot getting trapped in a "local minimum" when using the potential field method. Instead of using a "random walk" approach to avoid the robot getting trapped, I have thought about whether it is possible to implement a variation of A* which could act as a sort of guide, precisely to prevent the robot from getting trapped in a local minimum. Does anyone have experience with this kind of approach, or can you refer me to literature that avoids local minima in a more effective way than the "random walk" approach?

    Read the article

  • Structure of Astar (A*) graph search data in C#

    - by Shawn Mclean
    How do you structure your graphs/nodes in a graph search class? I'm basically creating a NavMesh and need to generate the nodes from one polygon to the other. The edge that joins both polygons will be the node. I'll then run A* on these nodes to calculate the shortest path. I just need to know how to structure my classes and their properties. I know for sure I won't need to create a fully blown undirected graph with nodes and edges.
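
    One common arrangement that matches the "shared edge becomes the node" idea: treat each portal (the edge two polygons share) as a graph node whose neighbours are the other portals of the polygons it touches, with the cost taken as the distance between portal midpoints. The Python sketch below uses invented names and data shapes purely to show the structure the C# classes could mirror.

        import math
        from collections import defaultdict

        def build_portal_graph(polygons):
            """polygons: dict polygon_id -> list of portals, each portal a pair of points
            ((x1, y1), (x2, y2)) shared with a neighbouring polygon.
            Returns adjacency: portal -> {neighbouring portal: cost}."""
            midpoint = lambda p: ((p[0][0] + p[1][0]) / 2.0, (p[0][1] + p[1][1]) / 2.0)
            graph = defaultdict(dict)
            for portals in polygons.values():
                # Any two portals on the same polygon are mutually reachable across it.
                for i, a in enumerate(portals):
                    for b in portals[i + 1:]:
                        cost = math.dist(midpoint(a), midpoint(b))
                        graph[a][b] = cost
                        graph[b][a] = cost
            return graph

    A* then searches portal to portal; if the resulting corridor needs straightening, a funnel/string-pulling pass over the crossed portals can be added afterwards.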

    Read the article

  • What is a maintainable way of saving "star rating" in a database?

    - by Montecristo
    I'll use the jQuery plugin for presenting the user with a nice interface. The request is to display 5 stars, up to a total score of 10 (2 points per star). For now I have thought about using 7/10 as the format for that value, but what if at some point in the future I receive a request like: "We would like to give users more choice; let's increase the total score to 20 (so that each star contributes a maximum of 4 points)." I'll end up with a table with mixed values in the "star rating" column: some will be like 7/10 while others will be like 14/20. Is it OK to have this difference in the database and deal with it in the logic layer to keep it consistent? Or is another way preferred, so that querying the table does not produce inconsistent results outside the application? Maybe floating-point values could help me: is it better to store the value as a number less than or equal to one? Then in each of the two examples the resulting value stored in the database would be 0.7, as a number, not a varchar, which can also be queried outside the application. What do you think?
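
    One way to frame the trade-off (a sketch of the idea, not a recommendation tied to any particular schema): store the raw points together with the scale that was in force when the vote was cast, and derive the normalized 0..1 value from them, so old 7/10 rows and new 14/20 rows stay both auditable and directly comparable. The field names here are invented.

        from dataclasses import dataclass

        @dataclass
        class Rating:
            points: int   # what the user actually gave, e.g. 7 or 14
            scale: int    # maximum possible at the time of voting, e.g. 10 or 20

            @property
            def normalized(self) -> float:
                # 0.0..1.0 value that stays comparable even if the scale changes later.
                return self.points / self.scale

        print(Rating(7, 10).normalized, Rating(14, 20).normalized)   # 0.7 0.7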

    Read the article
