Search Results

Search found 23323 results on 933 pages for 'worst is better'.

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default-constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries -- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store:

        size_t* indices_;  // a dynamic array
        T*      values_;   // a dynamic array
        int     size_;
        int     capacity_;

    Why don't they simply use

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that the vector of pairs is the better solution, yet both containers chose the other. Why?
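
    A minimal sketch of the vector-of-pairs layout the question proposes. The class shape, member names and the linear-scan lookup are illustrative assumptions, not code from either library, and set() is simplified to assume each index is written at most once:

        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class sparse_vector {
        public:
            // Store only values that differ from a default-constructed T.
            void set(std::size_t index, const T& value) {
                if (value == T()) return;            // defaults are not stored
                data_.push_back({index, value});     // simplified: no duplicate check
            }

            // Return the stored value, or T() if the index was never set.
            T get(std::size_t index) const {
                for (const auto& entry : data_)      // linear scan; a real container
                    if (entry.first == index)        // would keep indices sorted and
                        return entry.second;         // binary-search them
                return T();
            }

        private:
            // One contiguous block of index-value pairs instead of two parallel arrays.
            std::vector<std::pair<std::size_t, T>> data_;
        };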

  • implementing stretchable dialog borders in iphone sdk

    - by Joey
    Hi, I want to implement dialog borders that scale to whatever size I need the dialog to be. Perhaps there is a better, more conventional name for this sort of thing; if there is, and someone would edit the title, that'd be great. Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wildly disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions.

    1) Should I make a containing view object that overloads its drawRect method and draws the images at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, as in an animation.

    1b) If overloading drawRect is the way to go, does someone have sample code or a link to an example that demonstrates drawing an image directly from drawRect()?

    2) Is it generally better to create a) a single 3 x 3 image where the segments sit in their appropriate 1x1 cells of the grid (and if so, is it simple to draw a portion of this image onto my target view in drawRect, assuming drawRect is the right approach), or b) the pieces separately in 8 different files?

  • What is the best practice for using security in JAX-WS

    - by kislo_metal
    Here is the scenario: I have some web services (JAX-WS) that need to be secured. Currently, for authentication I provide an additional SecurityWService that gives an authorized user a userid and sessionid, which then have to be included in requests to the other services. It would be better to use some standard Java security instead. There are many options, but I could not decide which one is better to use.

    Q1: I understand that I should use SSL at the transport layer, but what should I use for user authorization? Is there a better way of establishing a session, validating the user, etc.?

    Some key details: most web service clients are PHP based; I am using the JAX-WS implementation as a stateless session EJB; deploying to GlassFish v3.

    Q2: What is the best framework / technology for user authorization / authentication when using JSF 2.0 and EJB 3.1 (Realms? WSIT?)? Thank you!

  • In Java it seems Public constructors are always a bad coding practice

    - by Adam Gent
    This may be a controversial question and may not be suited for this forum (so I will not be insulted if you choose to close it). It seems that, given the current capabilities of Java, there is no reason to ever make constructors public. Friendly (package-private), private and protected are OK, but not public. It seems that it is almost always a better idea to provide a public static factory method for creating objects. Every Java Bean serialization technology (JAXB, Jackson, Spring, etc.) can call a protected or private no-arg constructor.

    My questions are: has this practice ever been decreed or written down anywhere? Maybe Bloch mentions it, but I don't own his book. Is there a use case, other than perhaps not being super DRY, that I missed?

    EDIT: Why I think static factory methods are better:
    1. You get better type inference. For example, see Guava's http://code.google.com/p/guava-libraries/wiki/CollectionUtilitiesExplained
    2. As the designer of the class, you can later change what a static method returns.
    3. Dealing with constructor inheritance is painful, especially if you have to pre-calculate something.

  • normalize data to scale from 1 to 10

    - by Matjaz Lipus
    I have the following data set:

        A    B    N
        1    3    10
        2    3    5
        3    3    1
        3    6    5
        10   10   1
        20   41   5
        20   120  9

    I'm looking for an Excel function that will normalize A and B to N on a scale from 1 to 10. In the example above:

        1 of 3 is best, so N = 10
        2 of 3 is in the middle, so N = 5
        3 of 3 is worst, so N = 1
        20 of 120 is in the second decade, so N = 9

    Constraints: A >= 1 && A <= B, B is a natural number, and 1 <= N <= 10.

  • Installing Ruby 1.9.1 on Ubuntu?

    - by Björn
    I'm wondering about installing the latest version of Ruby on Ubuntu 9.04. I can run through the ./configure and make steps fine, but what I wonder about is how to avoid conflicts with the packaging system. For example, if some other package I install depends on Ruby, wouldn't the package manager install the (outdated) Ruby package and, in the worst case, overwrite my files? So I think I need some way to tell Ubuntu that Ruby is in fact already installed?

  • Most "thorough" distribution of points around a circle

    - by hippietrail
    This question is intended to both abstract and focus one approach to my problem expressed at "Find the most colourful image in a collection of images". Imagine we have a set of circles, each with a number of points around its circumference. We want a metric that gives a higher rating to a circle whose points are distributed evenly around it. Circles with points scattered through the full 360° are better, but circles with far more points in one area than in another are less good. The number of points is not limited. Two or more points may coincide, and coincident points are still relevant. A circle with one point at 0° and one point at 180° is better than a circle with 100 points at 0° and 1000 points at 180°. A circle with one point every degree around the circle is very good; a circle with a point every half degree is better. In my other (colour-based) question it was suggested that standard deviation would be useful, but with caveats. Is this a good suggestion, and does it cope with the closeness of 359° to 1°?
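
    A sketch of one metric from circular statistics that copes with the 359° / 1° wrap-around: treat each point as a unit vector, average the vectors, and use one minus the length of the resultant (the circular variance). The function name and the degrees-based input are illustrative assumptions. One caveat: two equally sized clusters on opposite sides of the circle also score highly, so this rewards balance rather than strict evenness.

        #include <cmath>
        #include <vector>

        // Spread score in [0, 1]: 0 when every point sits at one angle,
        // 1 when the points' unit vectors cancel out around the circle.
        double spread_score(const std::vector<double>& degrees) {
            if (degrees.empty()) return 0.0;
            const double pi = 3.14159265358979323846;
            double sum_x = 0.0, sum_y = 0.0;
            for (double d : degrees) {
                sum_x += std::cos(d * pi / 180.0);
                sum_y += std::sin(d * pi / 180.0);
            }
            double resultant = std::hypot(sum_x, sum_y) / degrees.size();
            return 1.0 - resultant;   // circular variance
        }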

  • C++ addition overload ambiguity

    - by Nate
    I am coming up against a vexing conundrum in my code base. I can't quite tell why my code generates this error, but (for example) std::string does not.

        class String
        {
        public:
            String(const char* str);
            friend String operator+(const String& lval, const char* rval);
            friend String operator+(const char* lval, const String& rval);
            String operator+(const String& rval);
        };

    The implementation of these is easy enough to imagine on your own. My driver program contains the following:

        String result, lval("left side "), rval("of string");
        char lv[] = "right side ", rv[] = "of string";

        result = lv + rval;
        printf(result);
        result = (lval + rv);
        printf(result);

    Which generates the following error in gcc 4.1.2:

        driver.cpp:25: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second:
        String.h:22: note: candidate 1: String operator+(const String&, const char*)
        String.h:24: note: candidate 2: String String::operator+(const String&)

    So far so good, right? Sadly, my String(const char *str) constructor is so handy to have as an implicit constructor that using the explicit keyword to solve this would just cause a different pile of problems. Moreover... std::string doesn't have to resort to this, and I can't figure out why. For example, in basic_string.h, they are declared as follows:

        template<typename _CharT, typename _Traits, typename _Alloc>
        basic_string<_CharT, _Traits, _Alloc>
        operator+(const basic_string<_CharT, _Traits, _Alloc>& __lhs,
                  const basic_string<_CharT, _Traits, _Alloc>& __rhs);

        template<typename _CharT, typename _Traits, typename _Alloc>
        basic_string<_CharT, _Traits, _Alloc>
        operator+(const _CharT* __lhs,
                  const basic_string<_CharT, _Traits, _Alloc>& __rhs);

    and so on. The basic_string constructor is not declared explicit. How does this not cause the same error I'm getting, and how can I achieve the same behavior?
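
    For comparison, a sketch of the shape basic_string uses: every operator+ overload is a non-member, so no member operator+ (whose non-const implied object parameter can bind the left-hand String better than a const String& does, while losing to the free function on the right-hand argument) is left to tie with the free overloads. This is an illustrative rearrangement of the class above, not the only possible fix; declaring the member operator+ const is another commonly suggested change.

        class String
        {
        public:
            String(const char* str);   // implicit conversion kept
            // note: no member operator+ at all
        };

        // Non-member overloads only, mirroring std::string: each call site now
        // has a single best candidate instead of a member/non-member tie.
        String operator+(const String& lhs, const String& rhs);
        String operator+(const String& lhs, const char* rhs);
        String operator+(const char* lhs, const String& rhs);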

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on thread management and, hopefully, the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do.

    Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads which attempt to find a better solution. These threads need to do a couple of things:

    They need to be initialized with different 'optimization metrics'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.

    If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads), it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics).

    I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional, though.

    The whole system obviously needs to be thread safe, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down, etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. Can anyone offer any general guidance? Thanks...

  • When, if ever, is "number of lines of code" a useful metric?

    - by user15071
    Some people claim that code's worst enemy is its size, and I tend to agree. Yet every day you keep hearing things like "I write blah lines of code in a day", "I own x lines of code", "Windows is x million lines of code". Question: when is "# lines of code" a useful metric? PS: Note that when such statements are made, the tone is "more is better".

  • Should uni provide "correct answer" after programming assignment is due?

    - by Michael Mao
    Hi all: This is my very first subjective question, and I think it is programming related: the assignment is to be written in a programming language. I am not after "getting full marks in a subject". I am actually not after a "correct answer" but a "better solution", so that I can compare and improve. I reckon it is good to practice programming first and check the solution later to pick up the things I've done wrong or badly; without a "benchmark" to compare against, this would be much harder. Unfortunately, as far as I know, not all programming subjects taught at uni kindly provide the students with a "correct answer" after the assignment is due. A bad metaphor for this is someone asking you a question they don't have a clear answer to themselves, hoping to use your answer as the basis for their own. Personally, I feel that an assignment solution provided by the academic staff is essential to students. I would appreciate it, and I suspect I am not the only one. I am a very proactive student at uni: I learn more, I practice more, and an assignment for me is more of a challenge to achieve "the best solution I can come up with", not something "I have to pass"... The cause of this question is that for the past few days I have crafted 500+ lines of Perl code for a tiny assignment. I feel pain when I look at my solution (not finished yet) and I feel like an idiot writing crap code. I know there must be a much better solution, and I reckon it is better for the lecturer in this subject to give me one than for me to ask for an answer here, even though I would shamelessly add a link to my solution alongside the assignment requirements. I know there are a lot of tutors/lecturers for programming subjects/courses on SO. I'd like to hear your thoughts on this question.

  • What is the technical skill degree of your co-workers?

    - by bonefisher
    I have been working as a developer for around 4 years now. Most of my team mates, in terms of technical skill, programming ability and coding practices, are somewhere between junior and senior. In all my previous jobs there was one real geek who was brilliant at coding, analysis and leading, but the others were just 'average' programmers. How would you rank your co-workers as developers, from rank 1 (best) to 5 (worst)?

  • Step math function

    - by ekapek
    Hi, I need a function which returns, for any number from a range, the following result:

        [0.001, 0.01) => 0.01
        [0.01, 0.1)   => 0.1
        [0.1, 1)      => 1
        [1, 10)       => 10
        [10, 100)     => 100

    etc. My first idea was to use a chain of ifs, but that is the worst way. Is there a simple solution?
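
    A minimal sketch of one branch-free approach (shown in C++; the same idea works in any language with log10): round the base-10 logarithm down and step up one decade. Values exactly on a decade boundary may need an epsilon because of floating-point rounding; the function name is illustrative.

        #include <cmath>

        // Maps any x in [10^k, 10^(k+1)) to 10^(k+1):
        // 0.005 -> 0.01, 0.5 -> 1, 7 -> 10, 42 -> 100, ...
        double step_up(double x) {
            return std::pow(10.0, std::floor(std::log10(x)) + 1.0);
        }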

  • The server method xxx failed (asp.net ajax)

    - by vikasde
    I have a website project that is working fine. However, every once in a while certain AJAX pages (that make calls to a web service) throw a "the server method xxx failed" error. I have ELMAH installed, but I do not see any stack trace or anything. The worst part is that I cannot reproduce the error locally; I just get an email notification from ELMAH. Does anybody know how I can fix this issue?

  • Many tooltips after upgrade to Eclipse 3.5

    - by javapowered
    I've upgraded to Eclipse 3.5 and now tooltips are displayed for everything: for any field, for any method, for anything. The worst thing is that they appear immediately after I've moved the mouse (no delay!), so I almost always see some tooltip, which is very inconvenient. How can I either increase the delay or disable tooltips?

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task is to send a light network request and act on the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implement this using the Executor framework, and I want to know which one is better and why:

    1. Create a few tasks, say 5 to 10, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send and receive, and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?

  • Threshold of blurry image - part 2

    - by 1''
    How can I threshold this blurry image to make the digits as clear as possible? In a previous post, I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right). Since then, I've tried using a morphological closing operation as described in this post to make the brightness of the image uniform. If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold. This is a lot better than before, but I have two problems:

    1. I had to manually choose the threshold value. Although the closing operation results in uniform brightness, the level of brightness might be different for other images.

    2. Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and should have a lower threshold, while some of the 6s have fused into 8s and should have a higher threshold.

    I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image), would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally thresholded image.

    Edit: remi suggested morphologically opening the thresholded image at the top right of this post. This doesn't work too well. Using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits.
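
    One possible way to address problem 1 is to divide the image by the closing result (flattening the illumination) and let Otsu's method pick the cutoff automatically. Below is a sketch using OpenCV's C++ API, not the exact pipeline from the original posts; the kernel size, the divide-based normalisation and the assumption of an 8-bit grayscale input are all illustrative choices to tune per image.

        #include <opencv2/opencv.hpp>

        cv::Mat threshold_digits(const cv::Mat& gray)   // gray: 8-bit, single channel
        {
            // Estimate the uneven background with a large morphological closing.
            cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(31, 31));
            cv::Mat background;
            cv::morphologyEx(gray, background, cv::MORPH_CLOSE, kernel);

            // Divide the image by its background to flatten the brightness.
            cv::Mat flattened;
            cv::divide(gray, background, flattened, 255.0);

            // A global Otsu threshold then avoids picking the cutoff by hand.
            cv::Mat binary;
            cv::threshold(flattened, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
            return binary;
        }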

  • C++ Namespaces & templates question

    - by Kotti
    Hi! I have some functions that can be grouped together, but don't belong to some object / entity and therefore can't be treated as methods. So, basically in this situation I would create a new namespace, put the declarations in a header file and the implementation in a cpp file. Also (if needed) I would create an anonymous namespace in that cpp file and put there all the additional functions that don't have to be exposed in my namespace's interface. See the code below (probably not the best example and could be done better with another program architecture, but I just can't think of a better sample...)

    Sample code (header):

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2);
        }

    Sample code (cpp):

        #include "header"

        // Anonymous namespace that wraps
        // routines that are used inside 'algorithm' methods
        // but don't have to be exposed
        namespace {
            void RefractObject(Object* object1)
            {
                // Do something with that object
                // (...)
            }
        }

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2)
            {
                if (...)
                    RefractObject(object1);
            }
        }

    So far so good. I guess this is a good way to manage my code, but I don't know what I should do if I have some template-based functions and want to do basically the same. If I'm using templates, I have to put all my code in the header file. OK, but how should I conceal some implementation details then? Like, I want to hide the RefractObject function from my interface, but I can't simply remove its declaration (just because I have all my code in a header file)... The only approach I came up with was something like:

    Sample code (header):

        namespace algorithm {
            // Is still exposed as a part of the interface!
            namespace impl {
                template <typename T>
                void RefractObject(T* object1)
                {
                    // Do something with that object
                    // (...)
                }
            }

            template <typename T, typename Y>
            void HandleCollision(T* object1, Y* object2)
            {
                impl::RefractObject(object1);
                // Another stuff
            }
        }

    Any ideas how to make this better in terms of code design?
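
    One alternative that is sometimes suggested, sketched below under the assumption that plain free-function call syntax can be given up: since the helper template has to live in the header anyway, hang the functions off a class and make the helper a private static member, so it is visible to the compiler but not callable by clients. The class name here is illustrative; the nested impl/detail namespace from the question is itself a widespread convention (e.g. in Boost), with the understanding that nothing inside it is part of the public interface.

        namespace algorithm {

        class Collision {
        public:
            template <typename T, typename Y>
            static void HandleCollision(T* object1, Y* object2)
            {
                RefractObject(object1);
                // ...
            }

        private:
            // Has to be visible in the header (it is a template), but private
            // access keeps it out of the callable interface.
            template <typename T>
            static void RefractObject(T* object1)
            {
                // Do something with that object
                // (...)
            }
        };

        } // namespace algorithm

    Call sites then read algorithm::Collision::HandleCollision(a, b), which is the trade-off for the extra hiding.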

  • Does using an ObjectCollection as a parameter create a new Control?

    - by Luis
    I was using something like:

        public int Test(System.Windows.Forms.ListBox.ObjectCollection Colecction) { }

    With this I want to pass just the ObjectCollection of the control, to sort, add and delete elements without passing the entire control. But someone told me that passing the collection this way actually creates an entire ListBox, making it a worse decision than passing a ListBox as a parameter. Is that true? And if so, what's the best way of working with the collection?

  • An O(1) Sort ~~~

    - by FlySwat
    Before you stone me for being a heretic: there is a sort that is proclaimed to be O(1), called "Bead Sort" (http://en.wikipedia.org/wiki/Bead_sort). However, that is pure theory; when actually applied, I found that it was really O(N * M), which is pretty pathetic. That said, let's list out some of the fastest sorts, and their best-case and worst-case speeds in Big O notation. ~~ FlySwat ~~

  • Suggestions on error handling of Win32 C++ code: AtlThrow vs. STL exceptions

    - by EmbeddedProg
    When writing Win32 C++ code, I'd appreciate some hints on how to handle errors from Win32 APIs. In particular, in case of a failure of a Win32 function call (e.g. MapViewOfFile), is it better to:

    1. use AtlThrowLastWin32, or
    2. define a Win32Exception class derived from std::exception, with an added HRESULT data member to store the HRESULT corresponding to the value returned by GetLastError?

    In the latter case, I could use the what() method to return a detailed error string (e.g. "MapViewOfFile call failed in MyClass::DoSomething() method."). What are the pros and cons of 1 vs. 2? Is there any other, better option that I am missing?

    As a side note, if I'd like to localize the component I'm developing, how could I localize the exception what() string? I was thinking of building a table mapping the original English string returned by what() to a Unicode localized error string. Could anyone suggest a better approach? Thanks much for your insights and suggestions.
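
    A minimal sketch of option 2 as described above, capturing GetLastError() at the throw site. The class and member names are illustrative assumptions; HRESULT_FROM_WIN32 is the standard macro for mapping a Win32 error code to an HRESULT.

        #include <exception>
        #include <string>
        #include <windows.h>

        class Win32Exception : public std::exception {
        public:
            explicit Win32Exception(const std::string& message)
                : message_(message), hr_(HRESULT_FROM_WIN32(GetLastError())) {}

            const char* what() const noexcept override { return message_.c_str(); }
            HRESULT error() const noexcept { return hr_; }

        private:
            std::string message_;
            HRESULT     hr_;
        };

        // Usage sketch:
        //   void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        //   if (view == nullptr)
        //       throw Win32Exception("MapViewOfFile call failed in MyClass::DoSomething() method.");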

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions...

    1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple databases, so I'm looking for generalizations, if that's possible.

    2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so that eventually it won't matter anyway?

    My varchar keys won't have any meaning, except for the unique system ID part, though I may want to obscure that somehow. Also, I plan to use all the bits of each character efficiently; I don't, for example, plan to encode the integer 123 as the character string "123". So I don't think varchars will require more space than integers.
