Search Results

Search found 2962 results on 119 pages for 'wang min'.


  • How to select the min value using the HAVING keyword

    - by LOVE_KING
    I have created the table stu_dept_cs:

        CREATE TABLE `stu_dept_cs` (
          `s_d_id` int(10) unsigned NOT NULL auto_increment,
          `stu_name` varchar(15),
          `gender` varchar(15),
          `address` varchar(15),
          `reg_no` int(10),
          `ex_no` varchar(10),
          `mark1` varchar(10),
          `mark2` varchar(15),
          `mark3` varchar(15),
          `total` varchar(15),
          `avg` double(2,0),
          PRIMARY KEY (`s_d_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC AUTO_INCREMENT=8;

    and then inserted these values:

        INSERT INTO `stu_dept_cs` (`s_d_id`, `stu_name`, `gender`, `address`, `reg_no`, `ex_no`, `mark1`, `mark2`, `mark3`, `total`, `avg`) VALUES
          (1, 'alex', 'm', 'chennai', 5001, 's1', '70', '90', '95', '255', 85),
          (2, 'peter', 'm', 'chennai', 5002, 's1', '80', '70', '90', '240', 80),
          (6, 'parv', 'f', 'mumbai', 5003, 's1', '88', '60', '80', '228', 76),
          (7, 'basu', 'm', 'kolkatta', 5004, 's1', '85', '95', '56', '236', 79);

    I want to select the min(avg) using the HAVING keyword, and I have used the following SQL statement:

        SELECT * FROM stu_dept_cs s having min(avg)

    Is this correct? If not, what is the correct statement?

  • Find the min, max, and average of one column of data in Python

    - by user1440194
    I have a set of data that looks like this:

        201206040210  -3461.00000000  -8134.00000000  -4514.00000000  -4394.00000000  0
        201206040211  -3580.00000000  -7967.00000000  -4614.00000000  -7876.00000000  0
        201206040212  -3031.00000000  -9989.00000000  -9989.00000000  -3419.00000000  0
        201206040213  -1199.00000000  -6961.00000000  -3798.00000000  -5822.00000000  0
        201206040214  -2940.00000000  -5524.00000000  -5492.00000000  -3394.00000000  0

    I want to take the second-to-last column and find the min, max, and average. I'm a little confused about how to use split when the columns are delimited by a space and a '-'. I figure once I do that I can use the min() and max() functions. I have written a shell script to do the same thing here:

        #!/bin/ksh
        awk '{print substr($5,2);}' data > data1
        sort -n data1 > data2
        tail -1 data2
        head -1 data2
        awk '{sum+=$1} END {print "average = ",sum/NR}' data2

    I'm just not sure how to do this in Python. Thanks.

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80 ms (1968 result rows). This one:

        SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600 ms. Is there a faster way to find the minimum, or should I calculate the min in my Java program?

        "Result  (cost=100.88..100.89 rows=1 width=0)"
        "  InitPlan 1 (returns $0)"
        "    ->  Limit  (cost=0.00..100.88 rows=1 width=9)"
        "          ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)"
        "                Index Cond: ("runnerId" IS NOT NULL)"
        "                Filter: ("marketId" = 107416794::bigint)"

        CREATE INDEX marketidindex ON betlog USING btree ("marketId" COLLATE pg_catalog."default");

    Another idea:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;   -- >1600 ms
        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId";           -- >100 ms

    How can a LIMIT slow the query down?

  • select field information with min time value

    - by Scarface
    Hey guys, quick question. I thought I was doing the right thing, but I keep getting the wrong result. I am trying to simply find the id of the entry with the minimum entry_time, but I am not getting that entry.

        $qryuserscount1 = "SELECT id, min(entry_time) FROM scrusersonline WHERE topic_id='$topic_id'";
        $userscount1 = mysql_query($qryuserscount1);
        while ($row2 = mysql_fetch_assoc($userscount1)) {
            echo $onlineuser = $row2['id'];
        }

    That is my query, and it does not work. This, however, does work, which does not make sense to me:

        SELECT id FROM scrusersonline WHERE topic_id='$topic_id' ORDER BY entry_time LIMIT 1

    Can anyone quickly point out what I am doing wrong?

  • 03/20/11 11:24 AM (MONTH/DATE/YEAR HOUR:MIN)

    - by The deals dealer
    I am using the code below:

        NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
        [dateFormatter setDateStyle:NSDateFormatterFullStyle];
        NSDate *today = [NSDate date];
        NSString *string = [dateFormatter stringFromDate:today];
        NSLog(@"%@", string);
        [dateFormatter release];

    This does not give me the format 03/20/11 11:24 AM (MONTH/DATE/YEAR HOUR:MIN). When I convert the system date and time, the resulting string should contain that kind of information, i.e. 03/20/11 11:24 AM (MONTH/DATE/YEAR HOUR:MIN). What should I do?

  • Code Contracts: Hiding ContractException

    - by DigiMortal
    It’s time to move on and improve the randomizer I wrote for my example of static checking of code contracts. In this posting I will modify the contracts and give some explanation of pre-conditions and post-conditions. I will also show you how to avoid ContractException and how to replace it with your own exception types. First, let’s take a look at my randomizer.

        public class Randomizer
        {
            public static int GetRandomFromRange(int min, int max)
            {
                var rnd = new Random();
                return rnd.Next(min, max);
            }

            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires(min < max, "Min must be less than max");

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    We have some problems here. We need a contract for the method’s output, and we also need a better exception handling mechanism. Since ContractException is a type that is hidden from us, we have to switch from ContractException to some other exception type that we can catch.

    Adding a post-condition

    Pre-conditions are contracts for a method’s input interface. Read it as follows: pre-conditions make sure that all conditions for the method’s successful run are met. Post-conditions are contracts for the output interface of a method, so they cover output arguments and the return value. My code misses the post-condition that checks the return value. The return value in this case must be greater than or equal to the minimum value and less than or equal to the maximum value. To make sure the method can return only correct values, I added a call to the Contract.Ensures() method.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    I think the line I added does not need any further comment.

    Avoiding ContractException for the input interface

    ContractException lives in a hidden namespace and we cannot see it at design time, but it is the common exception type for all contract failures that we do not switch over to some other type. The case of the Contract.Requires() method is simple: we can tell it what kind of exception we want if the contract it ensures is violated.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            var rnd = new Random();
            return rnd.Next(min, max);
        }

    Now, if we violate the input interface contract by giving a min value that is not less than the max value, we get an ArgumentOutOfRangeException.

    Avoiding ContractException for the output interface

    The output interface is more complex to control. We cannot specify an exception type there and hope that this type of exception will be thrown if something goes wrong. Instead we have to use a delegate that gathers information about the problem and throws the exception we expect to be thrown. The documentation gives the following example of the delegate I mentioned.

        Contract.ContractFailed += (sender, e) =>
        {
            e.SetHandled();
            e.SetUnwind(); // cause code to abort after event
            Assert.Fail(e.FailureKind.ToString() + ":" + e.DebugMessage);
        };

    We can use this delegate to throw the exception. Let’s also move the code to a separate method. Here is our method, which now uses ContractException hiding.

        public static int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires(min < max, "Min must be less than max");

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );
            Contract.ContractFailed += Contract_ContractFailed;

            var rnd = new Random();
            return rnd.Next(min, max) + 1000;
        }

    And here is the delegate that creates the exception.

        public static void Contract_ContractFailed(object sender,
            ContractFailedEventArgs e)
        {
            e.SetHandled();
            e.SetUnwind();

            throw new Exception(e.FailureKind.ToString() + ":" + e.Message);
        }

    Basically, we can do whatever we like with output interface errors in this delegate. We can even introduce our own contract exception type. As you will see later, the ContractFailed event is also very useful for unit testing.
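    A short usage sketch (mine, not from the original post) of the version of GetRandomFromRangeContracted that uses Requires<ArgumentOutOfRangeException>(). It assumes the Randomizer class above is in the same project and that the project is built with the Code Contracts runtime rewriter enabled, which is what makes the requested exception type actually get thrown:

        using System;

        class RandomizerDemo
        {
            static void Main()
            {
                try
                {
                    // min is not less than max, so the pre-condition is violated.
                    int value = Randomizer.GetRandomFromRangeContracted(10, 5);
                    Console.WriteLine(value);
                }
                catch (ArgumentOutOfRangeException ex)
                {
                    // The violated pre-condition now surfaces as a well-known,
                    // catchable exception type instead of the hidden ContractException.
                    Console.WriteLine(ex.Message);
                }
            }
        }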

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation.  Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer.  Let’s look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, then element 2 sets the min variable, we’ll detect a min value of 2 instead of 1.  This can lead to wrong answers.

    Unfortunately, fixing this with the Parallel.ForEach call we’re using would require adding locking.  We would need to rewrite this like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock(syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.

    This leads to two observations I’ll make:

      1. When parallelizing a routine, try to avoid locks.
      2. That being said: always add any and all required synchronization to avoid race conditions.

    These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item.  Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, “merge” together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.  This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation.

    Now that I’ve explained how this works, let’s look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since it is doing a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
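    To make that last point concrete, here is a minimal sketch (mine, not from the article) of the same local-state pattern applied to a sum, where the per-thread totals are merged with Interlocked.Add instead of a lock. The Item class and its PerformComputation method are hypothetical stand-ins, returning a long because Interlocked.Add works on integer types rather than double:

        using System;
        using System.Collections.Generic;
        using System.Threading;
        using System.Threading.Tasks;

        class InterlockedAggregationSketch
        {
            // Hypothetical work item; PerformComputation returns a long so the
            // final merge can use Interlocked.Add.
            class Item
            {
                private readonly long _value;
                public Item(long value) { _value = value; }
                public long PerformComputation() { return _value * 2; }
            }

            static void Main()
            {
                var collection = new List<Item>();
                for (int i = 1; i <= 1000; i++) collection.Add(new Item(i));

                long sum = 0;

                Parallel.ForEach(
                    collection,
                    // Local state: a per-thread running total, no locking needed here.
                    () => 0L,
                    // Body: add this item's result to the thread-local total.
                    (item, loopState, localTotal) => localTotal + item.PerformComputation(),
                    // Merge: one atomic add per thread instead of a lock per element.
                    localTotal => Interlocked.Add(ref sum, localTotal));

                Console.WriteLine(sum);   // 2 * (1 + 2 + ... + 1000) = 1001000
            }
        }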

  • what does macosx-version-min imply?

    - by Quincy
    When I pass the compiler flag "-mmacosx-version-min=10.5", what does it mean? I think it implies that the resulting binary is x86, not PPC, but is it 32-bit or 64-bit? I'm compiling on Snow Leopard, so the default output binary is 64-bit. I'm not passing -universal, so it's not a 32-bit/64-bit universal binary, I think.

  • Problem with jquery-1.3.2.min.js JQuery

    - by Geetha
    Hi, I was trying to add a jQuery file to my application and it gave me the following error:

        Warning 22  Error updating JScript IntelliSense: C:\Users\Desktop\jquery-1.3.2.min.js: Object doesn't support this property or method @ 18:9345

    Geetha.

  • Kubuntu - Can't move/max/min windows [closed]

    - by GregH
    All of a sudden, it seems that whenever I open a window on my Kubuntu (9.10) system, the window docks in the upper left corner and can't be moved. There is no border on the windows and no min/max/close buttons in the upper right corner. I tried opening a terminal window, but it seems I can't type in the window. Any ideas what might be causing this?

  • Responsive grid, floated elements move underneath one another when min-width is reached

    - by Francesca
    I'm in the process of creating a responsive site (currently only working on the main content). I've created two floated divs with percentage-based widths. One contains an image which resizes as the browser is resized. The image has a min-width of 100px. Can anyone tell me why, when put into a mobile-sized width, it doesn't drop down? How can I make the image stack underneath the text? JS Fiddle | Live site

  • linq min question/bug

    - by jdelator
    Anyone care to guess what currentIndex is at the end of execution?

        int[] ints = new int[] { -1, 12, 26, 41, 52, -1, -1 };
        int currentIndex = ints.Min();

    It's 110. Does anyone know why?
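    For reference, a minimal self-contained check (not part of the original question) of what Enumerable.Min returns for this exact array. Run in isolation it prints -1, so a value of 110 would have to come from other code in the program, for example a different variable or an overload that projects each element first:

        using System;
        using System.Linq;

        class MinCheck
        {
            static void Main()
            {
                int[] ints = new int[] { -1, 12, 26, 41, 52, -1, -1 };

                // Enumerable.Min returns the smallest element of the sequence.
                int currentIndex = ints.Min();
                Console.WriteLine(currentIndex);                 // -1

                // The selector overload compares projected values instead,
                // e.g. the smallest absolute value in the array:
                Console.WriteLine(ints.Min(i => Math.Abs(i)));   // 1
            }
        }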

  • Second min cost spanning tree.

    - by Evil
    Hello. I'm writing an algorithm for finding the second-minimum-cost spanning tree. My idea was as follows:

    1) Use Kruskal's to find the lowest-cost MST.
    2) Delete the lowest-cost edge of the MST.
    3) Run Kruskal's again on the entire graph.
    4) Return the new MST.

    My question is this: will this work? Is there perhaps a better way to do this?
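    A minimal sketch (mine, in C#, with hypothetical names) of the variant that is usually suggested for this problem: instead of deleting only the cheapest MST edge, remove each MST edge in turn, re-run Kruskal's on the remaining edges, and keep the cheapest resulting spanning tree. Deleting a single fixed edge can miss the second-best tree, because the second-best tree may differ from the MST in a more expensive edge:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SecondBestMstSketch
        {
            class Edge
            {
                public int U, V, Weight;
                public Edge(int u, int v, int w) { U = u; V = v; Weight = w; }
            }

            // Union-find "find" with path halving.
            static int Find(int[] parent, int x)
            {
                while (parent[x] != x)
                {
                    parent[x] = parent[parent[x]];
                    x = parent[x];
                }
                return x;
            }

            // Kruskal's algorithm over vertices 0..n-1, optionally skipping one edge.
            // Returns null if no spanning tree exists without the skipped edge.
            static List<Edge> Kruskal(int n, List<Edge> edges, Edge skip = null)
            {
                var parent = Enumerable.Range(0, n).ToArray();
                var tree = new List<Edge>();
                foreach (var e in edges.OrderBy(edge => edge.Weight))
                {
                    if (e == skip) continue;
                    int ru = Find(parent, e.U), rv = Find(parent, e.V);
                    if (ru == rv) continue;     // would create a cycle
                    parent[ru] = rv;
                    tree.Add(e);
                }
                return tree.Count == n - 1 ? tree : null;
            }

            // Second-best spanning tree: drop each MST edge in turn and re-run Kruskal's.
            static List<Edge> SecondBest(int n, List<Edge> edges)
            {
                var mst = Kruskal(n, edges);
                if (mst == null) return null;

                List<Edge> best = null;
                foreach (var removed in mst)
                {
                    var candidate = Kruskal(n, edges, removed);
                    if (candidate != null &&
                        (best == null || candidate.Sum(e => e.Weight) < best.Sum(e => e.Weight)))
                    {
                        best = candidate;
                    }
                }
                return best;
            }

            static void Main()
            {
                var edges = new List<Edge>
                {
                    new Edge(0, 1, 1), new Edge(1, 2, 2), new Edge(0, 2, 3),
                    new Edge(2, 3, 4), new Edge(1, 3, 5)
                };

                // Here the MST weighs 7; removing only its cheapest edge (0-1) yields a
                // tree of weight 9, while trying every MST edge finds the true
                // second-best tree of weight 8.
                var second = SecondBest(4, edges);
                Console.WriteLine(string.Join(", ",
                    second.Select(e => $"{e.U}-{e.V} ({e.Weight})")));
            }
        }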

  • How to use the Sort function the same way as Max or Min in Mathematica

    - by Qiang Li
    Please take a look at the following code:

        Sort[{1, y, x}, Greater]
        Max[{1, x, y}]
        x = 1
        y = 2
        Sort[{1, y, x}, Greater]
        Max[{1, x, y}]

    It is interesting to note that the first Sort always produces a definite result, while the first Max does not, even when Greater is specified. Note that I have not given any numerical values for x and y at that point. Why is this, and how can I make the Sort function behave the same way as the Max (or Min) function? Thanks!

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection.  I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative programming, using imperative data parallelism via the Parallel class.  While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project.  When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task.  By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain.  Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let’s begin with a short discussion of LINQ.  LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors.  For our purposes, we are interested in LINQ to Objects.  When parallelizing a routine, we are typically dealing with in-memory data storage.  The more data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>.  The language enhancements use these extension methods to create a very concise, readable alternative to using a traditional foreach statement.  For example, let’s revisit the minimum aggregation routine we wrote in Part 4:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    Here, we’re doing a very simple computation, but writing it in an imperative style.  This can be loosely translated to English as:

      Create a very large number, and save it in min
      Loop through each item in the collection. For every item:
        Perform some computation, and save the result
        If the computation is less than min, set min to the computation

    Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer.  We can rework this same statement using LINQ:

        double min = collection.Min(item => item.PerformComputation());

    Here, we’re after the same information.  However, this is written using a declarative programming style.  When we see this code, we’d naturally translate it to English as:

      Save the Min value of collection, determined via calling item.PerformComputation()

    That’s it – instead of multiple logical steps, we have one single, declarative request.  This makes the developer’s intentions very clear, and very easy to follow.  The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let’s again refer to our minimum aggregation routine from Part 4, but this time, let’s review our final, parallelized version:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Here, we’re doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat:

      Create a very large number, and save it in min
      Create a temporary object we can use for locking
      Call Parallel.ForEach, specifying three delegates
        For the first delegate:
          Initialize a local variable to hold the local state to a very large number
        For the second delegate:
          For each item in the collection, perform some computation, save the result
          If the result is less than our local state, save the result in local state
        For the final delegate:
          Take a lock on our temporary object to protect our min variable
          Save the min of our min and local state variables

    Although this solves our problem, and does it in a very efficient way, we’ve created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel().  That’s all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding “.AsParallel()” to our LINQ to Objects query, we converted this to using PLINQ, running the computation in parallel!  This can be loosely translated into English easily, as well:

      Process the collection in parallel
      Get the Minimum value, determined by calling PerformComputation on each item

    Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it “as parallel”.  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.
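    For anyone who wants to try this end to end, here is a small self-contained sketch (mine, not from the article) with a hypothetical Item class standing in for the article's PerformComputation work; the WithDegreeOfParallelism call is optional and just shows one of the knobs PLINQ exposes:

        using System;
        using System.Linq;

        class PlinqMinSketch
        {
            // Hypothetical work item standing in for the article's collection elements.
            class Item
            {
                private readonly int _seed;
                public Item(int seed) { _seed = seed; }

                // Stand-in for an expensive computation.
                public double PerformComputation()
                {
                    return Math.Sqrt(_seed) * 42.0;
                }
            }

            static void Main()
            {
                var collection = Enumerable.Range(1, 100000)
                                           .Select(i => new Item(i))
                                           .ToList();

                // The same declarative query as the serial version; AsParallel() lets
                // PLINQ partition the work across the available cores.
                double min = collection
                    .AsParallel()
                    .WithDegreeOfParallelism(Environment.ProcessorCount)
                    .Min(item => item.PerformComputation());

                Console.WriteLine(min);   // 42 (the item with seed 1)
            }
        }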

  • Hide ticks at Min and Max in WPF Slider

    - by gehho
    Hi, I want to display a Slider ranging from 0.5 to 1.5 with only one tick mark at 1.0 to mark the center and default value. I have defined a Slider as follows:

        <Slider Minimum="0.5" Maximum="1.5"
                IsMoveToPointEnabled="True"
                IsSnapToTickEnabled="False"
                Orientation="Horizontal"
                Ticks="1.0" TickPlacement="BottomRight"
                Value="{Binding SomeProperty, Mode=TwoWay}"/>

    However, besides the tick mark at 1.0, this Slider also shows tick marks at 0.5 and 1.5, i.e. the Minimum and Maximum values. Is there a way to hide these min/max tick marks? I checked all the properties and tried changing some of them, but have had no success so far. Thanks, gehho.
