Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • better way to write this

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

      h = Hash.new
      Task.find_each(:include => [:user], :joins => :user,
                     :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                     Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
        h[t.login.intern] = [t.user.name, 'NA',
                             h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                    : h[t.login.intern][2] + (t.hrs_per_day * t.num_days),
                             t.category]
      end

    Also, if I have to aggregate not just by login but by login and category, how do I accomplish this? Thanks, ash
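
    A possible sketch for the second part (aggregating by login and category), assuming the attribute names from the question and letting tasks stand for the result of the same find: a two-element array works as a composite hash key, and a default value of 0 removes the nil? check.

      # sketch: cumulative hours per [login, category] pair
      totals = Hash.new(0)
      tasks.each do |t|
        totals[[t.login.intern, t.category]] += t.hrs_per_day * t.num_days
      end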

    Read the article

  • Better way to write this SQL

    - by AngryHacker
    I have the following table:

      create table ARDebitDetail(
          ID_ARDebitDetail int identity,
          ID_Hearing int,
          ID_AdvancedRatePlan int)

    I am trying to get the latest ID_AdvancedRatePlan for a given ID_Hearing. By latest I mean the one with the largest ID_ARDebitDetail. I have this query and it works fine:

      select ID_AdvancedRatePlan
      from ARDebitDetails
      where ID_Hearing = 135878
        and ID_ARDebitDetail =
            (select max(ID_ARDebitDetail)
             from ARDebitDetails
             where ID_AdvancedRatePlan > 0
               and ID_Hearing = 135878)

    However, it just looks ugly and smells bad. Is there a way to rewrite it in a more concise manner?
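
    One common rewrite, assuming SQL Server (the identity column suggests it): let TOP 1 ... ORDER BY pick the row with the largest ID_ARDebitDetail directly. This is equivalent here because the subquery's max comes from the same filtered set.

      select top 1 ID_AdvancedRatePlan
      from ARDebitDetails
      where ID_Hearing = 135878
        and ID_AdvancedRatePlan > 0
      order by ID_ARDebitDetail desc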

    Read the article

  • How is IE7 any better than IE6?

    - by Raul Agrait
    Oftentimes in the web development community, you hear people complaining about developing for IE6. However, if you are developing using a robust JavaScript framework like jQuery, is developing for IE6 any different than developing for IE7?

    Read the article

  • Is it better to use List or Collection?

    - by Vivin Paliath
    I have an object that stores some data in a list. The implementation could change later, and I don't want to expose the internal implementation to the end user. However, the user must have the ability to modify and access this collection of data. Currently I have something like this:

      public List<SomeDataType> getData() {
          return this.data;
      }

      public void setData(List<SomeDataType> data) {
          this.data = data;
      }

    Does this mean that I have allowed the internal implementation details to leak out? Should I be doing this instead?

      public Collection<SomeDataType> getData() {
          return this.data;
      }

      public void setData(Collection<SomeDataType> data) {
          this.data = new ArrayList<SomeDataType>(data);
      }
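
    A related sketch (not from the question): if the goal is to hide the concrete type while keeping control of mutation, one option is to return an unmodifiable view and expose explicit mutators, so callers can read but not reshape the internal list.

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.List;

      public class DataHolder<T> {
          private final List<T> data = new ArrayList<T>();

          // callers can iterate but not modify the returned view
          public List<T> getData() {
              return Collections.unmodifiableList(data);
          }

          // mutation goes through the owner, so the implementation stays private
          public void addData(T item) {
              data.add(item);
          }
      }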

    Read the article

  • Which validation framework is better?

    - by Nick Yao
    Does anyone have any recommendations for any of these ASP.NET MVC validation frameworks?

      xVal: http://xval.codeplex.com/
      FluentValidation: http://fluentvalidation.codeplex.com/documentation
      NHibernate.Validator
      DataAnnotations

    By the way, my project uses Sharp Architecture.

    Read the article

  • C++ defines for a 'better' Release mode build in VS

    - by darid
    I currently use the following preprocessor defines, plus various optimization settings:

      WIN32_LEAN_AND_MEAN
      VC_EXTRALEAN
      NOMINMAX
      _CRT_SECURE_NO_WARNINGS
      _SCL_SECURE_NO_WARNINGS
      _SECURE_SCL=0
      _HAS_ITERATOR_DEBUGGING=0

    My question is: what other things do fellow SOers use, add, or define in order to get a Release mode build from VS C++ (2008, 2010) to be as performant as possible? By the way, I've tried PGO etc.; it helps a bit, but nothing that reaches parity. Also, I'm not using streams; the C++ I'm talking about is more like C, but making use of templates and STL algorithms. As it stands now, very simple code segments flop when compared to what GCC produces at -O2 on an equivalent x86 machine running Linux (2.6+ kernel). Side note: I believe a lot of the issues relate directly to the STL version (Dinkumware) provided by MS. Could people please elaborate on experiences using STLport etc. with VS C++?
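
    For comparison, a typical aggressive Release configuration for VS 2008/2010 might combine compiler switches like the following. This is a sketch of commonly used options, not a guaranteed win; each one should be measured on the actual workload:

      /O2          maximize speed
      /Ob2         aggressive inlining
      /Oi          enable intrinsic functions
      /GL          whole-program optimization (link with /LTCG)
      /fp:fast     relaxed floating-point model
      /GS-         disable buffer security checks (faster, less safe)
      /arch:SSE2   SSE2 code generation (32-bit builds)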

    Read the article

  • Better name for CHAR_BIT?

    - by Potatoswatter
    I was just checking an answer and realized that CHAR_BIT isn't defined by headers as I'd expect, not even by #include <bitset>, on newer GCC. Do I really have to #include <climits> just to get the "functionality" of CHAR_BIT?
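
    For what it's worth, CHAR_BIT is specified to live in <climits> (or <limits.h>); any other header that happens to drag it in is an implementation detail. A minimal sketch:

      #include <climits>
      #include <cstdio>

      int main() {
          std::printf("char is %d bits wide\n", CHAR_BIT);  // 8 on mainstream platforms
          return 0;
      }

    The macro-free C++ alternative is std::numeric_limits<unsigned char>::digits from <limits>.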

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines, bold and solid, are displayed on top; then 10 experimental data sets that converge to these lines are graphed, each using a different marker (e.g. + or o or a square, etc.). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but where all the datasets converge (> 1e3) it's really difficult to see which data is which. There are over 1,000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph suffers in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back toward 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that, as my data is linear, I could just use a linear index to prune, which led to something like this (but for all decades):

      % grab indices per decade
      ind12 = find(y >= 1e1 & y <= 1e2);
      indlow = find(y < 1e2);
      indhigh = find(y > 1e4);
      ind23 = find(y > 1e2 & y <= 1e3);
      ind34 = find(y > 1e3 & y <= 1e4);

      % we want ind12 indices in this decade, find spacing
      tot23 = round(length(ind23)/length(ind12));
      tot34 = round(length(ind34)/length(ind12));

      % grab ones to keep
      ind23keep = ind23(1):tot23:ind23(end);
      ind34keep = ind34(1):tot34:ind34(end);

      indnew = [indlow' ind23keep ind34keep indhigh'];
      loglog(x(indnew), y(indnew));

    But this obviously makes the pruning behave in a jumpy fashion. Each decade has the number of points I'd like, but as it's a linear distribution, the points tend to clump at the high end of the decade on the log scale. Any ideas on how I can do this?
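
    A sketch of one way to build a logarithmically spaced index vector, assuming the samples are sorted so that index position tracks the magnitude of y: pick the indices with logspace, so the spacing widens toward the high end.

      % keep roughly N points, spaced logarithmically in index position
      N = 500;
      idx = unique(round(logspace(0, log10(numel(y)), N)));
      loglog(x(idx), y(idx), '+');

    If the data is not monotone, the same idea can be applied per decade, by logspace-ing within each find() range instead.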

    Read the article

  • A monkey could do this better - Access to and availability of private member functions in C++

    - by David
    I am wandering the desert of my brain. I'm trying to write something like the following:

      class MyClass {
          // Peripherally Related Stuff
      public:
          void TakeAnAction(int oneThing, int anotherThing) {
              switch (oneThing) {
                  case THING_A:
                      TakeThisActionWith(anotherThing);
                      break;
                  // cases THINGS_NOT_A: ...
              };
          }

      private:
          void TakeThisActionWith(int thing) {
              string outcome = new string;
              outcome = LookUpOutcome(thing);
              // Do some stuff based on outcome
              return;
          }

          string LookUpOutcome(int key) {
              string oc = new string;
              oc = MyPrivateMap[key];
              return oc;
          }

          map<int, string> MyPrivateMap;
      };

    Then in the .cc file where I am actually using these things, while compiling the TakeAnAction section, it [CC, the Solaris compiler] throws an error, 'The function LookUpOutcome must have a prototype', and bombs out. In my header file, I have declared string LookUpOutcome(int key); in the private section of the class. I have tried all sorts of variations. I tried to use this for a little while, and it gave me 'Can only use this in non-static member function.' Sadly, I haven't declared anything static, and these are all, putatively, member functions. I tried it [on TakeAnAction and LookUp] when I got the error, but I got something like 'Can't access MyPrivateMap from LookUp'. MyPrivateMap could be made public and I could refer to it directly, I guess, but my sensibility says that is not the right way to go about this [that means that namespace-scoped helper functions are out, I think]. I also guess I could just inline the lookup and subsequent other stuff, but my line-o-meter goes on tilt. I'm trying desperately not to kludge it.
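
    A minimal compiling sketch of the same shape, for contrast (names taken from the question; the THING_A enum is an assumption, since the question never shows it): member functions declared in the class and defined out of line, which is usually what resolves the "must have a prototype" symptom when a stray brace or semicolon has thrown the parser off.

      #include <map>
      #include <string>

      enum Things { THING_A };  // assumed: not shown in the question

      class MyClass {
      public:
          void TakeAnAction(int oneThing, int anotherThing);
      private:
          void TakeThisActionWith(int thing);
          std::string LookUpOutcome(int key);
          std::map<int, std::string> MyPrivateMap;
      };

      void MyClass::TakeAnAction(int oneThing, int anotherThing) {
          switch (oneThing) {
              case THING_A: TakeThisActionWith(anotherThing); break;
          }
      }

      void MyClass::TakeThisActionWith(int thing) {
          std::string outcome = LookUpOutcome(thing);  // no 'new' needed for std::string
          // do some stuff based on outcome
      }

      std::string MyClass::LookUpOutcome(int key) {
          return MyPrivateMap[key];
      }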

    Read the article

  • [jscript] Good (better) substitution for setInterval or setTimeout

    - by riffnl
    I've got a master page with some nice UI (jQuery) features. One of these options is interfering with my embedded YouTube (or other, similar) objects. On each setInterval event, in this case, the embedded video stops displaying new frames (for about a second). More detail: I've got a "polaroid" gallery (in the header) with only 5 100x100 images in it (test: preloading has no effect on performance), and my gallery shows or hides them (fade-in/fade-out) after a period of time (test: non-animated display:hide or display:block has no effect on performance). After some testing I've come to the conclusion that it isn't the animated showing or hiding of the pictures, but the interval itself (since switching to display:hide or block had the same result). Perhaps it is my gallery function itself...

      function poladroid() {
          if (!galleryHasFocus) {
              if (galleryMax >= 0) {
                  galleryCurrent++;
                  if (galleryCurrent > galleryMax) {
                      galleryCurrent = 0;
                      showPictures = !showPictures;
                  }
                  if (showPictures) {
                      $('#pic-' + galleryCurrent.toString()).show("slow");
                  } else {
                      $('#pic-' + galleryCurrent.toString()).hide("slow");
                  }
              }
          }
          if (!intervalSet) {
              window.setInterval("poladroid()", 3000);
              intervalSet = true;
          }
      }

    It's not like my function is doing really awkward stuff, is it? So I was thinking I need a more "loose" interval function... but is there an option for it?
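
    One hedged suggestion (a sketch, not a guaranteed fix for the video stutter): chain setTimeout calls instead of using setInterval, and pass the function reference rather than a string so nothing is re-parsed each tick. A chained timeout only schedules the next run after the current one finishes, which tends to be gentler than a fixed interval.

      function poladroid() {
          // ... existing show/hide logic ...
          window.setTimeout(poladroid, 3000);   // re-arm only when this run is done
      }
      window.setTimeout(poladroid, 3000);       // kick off once; no intervalSet flag needed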

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed. Here's my table:

      Longtitude  Latitude  Velocity  Time
      102         401       40        2010-06-01 10:22:34.000
      103         403       50        2010-06-01 10:40:00.000
      104         405       0         2010-06-01 11:00:03.000
      104         405       0         2010-06-01 11:10:05.000
      105         406       35        2010-06-01 11:15:30.000
      106         403       60        2010-06-01 11:20:00.000
      108         404       70        2010-06-01 11:30:05.000
      109         405       0         2010-06-01 11:35:00.000
      109         405       0         2010-06-01 11:40:00.000
      105         407       40        2010-06-01 11:50:00.000
      104         406       30        2010-06-01 12:00:00.000
      101         409       50        2010-06-01 12:05:30.000
      104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0): from when to when it stopped and for how many minutes, how many times it stopped, and how much time it spent stopped in total. I wrote this query to do it:

      select longtitude, latitude, MIN(time), MAX(time),
             DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
      from table_1
      where velocity = 0
      group by longtitude, latitude

      select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
      into #temp3
      from table_1
      where velocity = 0
      group by longtitude, latitude

      select COUNT(*) as [number] from #temp3
      select SUM(minute) as [totaltime] from #temp3
      drop table #temp3

    This query returns:

      longtitude  latitude  (No column name)         (No column name)         Timespan
      104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
      109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

      number
      2

      totaltime
      15

    You can see it works fine, but I really don't like the #temp table. Is there any way to query this without using a temp table? Thank you.
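
    A sketch of one temp-free version (SQL Server 2005 syntax): wrap the grouped query in a derived table and aggregate over it in a single statement.

      select COUNT(*) as [number],
             SUM(stop_minutes) as [totaltime]
      from (select DATEDIFF(minute, MIN(Time), MAX(Time)) as stop_minutes
            from table_1
            where velocity = 0
            group by longtitude, latitude) as stops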

    Read the article

  • Could I be writing this code better?

    - by Ben Dauphinee
    Is there any website out there somewhere where a programmer such as myself might be able to post pieces of code to be looked at by more experienced people? I am thinking of something programmers could use to get advice on how to improve their skills. I really like the atmosphere here, but am not sure that posting code for review here is appropriate.

    Read the article

  • Better algorithm for estimating download time

    - by Scott Smith
    We've all seen the running download-time estimate that initially says something like "7 days" but then drops wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc.) with each successive estimate as the chunks are downloaded. To avoid these alarming initial estimates, there are techniques one could try, like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before you start displaying them, but these don't seem like a general, robust solution: there are corner cases involving too few samples, or samples that really are wildly varying. I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduces or eliminates these wild values. Does anyone know?
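
    The usual trick here is exponential smoothing: a low-pass filter over the measured rate. A sketch in C#, with a made-up alpha; a smaller alpha gives a steadier but slower-reacting estimate:

      // exponentially weighted moving average of the download rate
      class EtaEstimator
      {
          const double Alpha = 0.1;   // smoothing factor; tune to taste
          double smoothedRate;        // bytes per second

          public void OnChunkReceived(long bytes, double elapsedSeconds)
          {
              double instantRate = bytes / elapsedSeconds;
              smoothedRate = (smoothedRate == 0.0)
                  ? instantRate                                  // seed with the first sample
                  : Alpha * instantRate + (1 - Alpha) * smoothedRate;
          }

          public double EstimatedSecondsLeft(long bytesRemaining)
          {
              return bytesRemaining / smoothedRate;
          }
      }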

    Read the article

  • How to better design it?

    - by Deepak
      public interface IBasePresenter { }
      public interface IJobViewPresenter : IBasePresenter { }
      public interface IActivityViewPresenter : IBasePresenter { }

      public class BaseView {
          public IBasePresenter Presenter { get; set; }
      }

      public class JobView : BaseView {
          public IJobViewPresenter JobViewPresenter {
              get { return this.Presenter as IJobViewPresenter; }
          }
      }

      public class ActivityView : BaseView {
          public IActivityViewPresenter ActivityViewPresenter {
              get { return this.Presenter as IActivityViewPresenter; }
          }
      }

    Let's assume that I need an IBasePresenter property on BaseView. This property is inherited by JobView and ActivityView, but if I need a reference to the IJobViewPresenter object in these derived classes, then I either have to cast the IBasePresenter property to IJobViewPresenter or IActivityViewPresenter (which I want to avoid), or create JobViewPresenter and ActivityViewPresenter properties on the derived classes (as shown above). I want to avoid casting in the derived classes and still have a reference to IJobViewPresenter or IActivityViewPresenter, while keeping IBasePresenter on BaseView. Is there a way I can achieve this?
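
    One sketch of a cast-free design: make the presenter type a generic parameter on the base view, constrained to IBasePresenter. Each derived view then gets a strongly typed Presenter property for free.

      public class BaseView<TPresenter> where TPresenter : IBasePresenter
      {
          public TPresenter Presenter { get; set; }
      }

      public class JobView : BaseView<IJobViewPresenter> { }
      public class ActivityView : BaseView<IActivityViewPresenter> { }

    The trade-off is that there is no longer a single non-generic BaseView type to refer to; if that matters, a non-generic interface exposing IBasePresenter can sit alongside the generic class.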

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just in the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      - Security
      - Concurrent access
      - Performance with large amounts of data
      - Amount of time needed for such a massive rewrite/switch
      - Lack of transactions
      - PITA to map relational data to flat files
      - NTFS doesn't handle huge numbers of files in a directory well

    I fear this will be a great post on The Daily WTF someday if I can't stop it now.
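
    One possible starting point for the concurrent-access part of the demo, as a sketch (filename and counts are invented): several writers appending to one shared flat file at once will collide on file locks, while SQL Server serializes concurrent writers transparently.

      using System.IO;
      using System.Threading.Tasks;

      class FlatFileDemo
      {
          static void Main()
          {
              // ten "clients" appending to the same flat file at once;
              // expect IOExceptions from file sharing/locking
              Parallel.For(0, 10, i =>
              {
                  using (var w = new StreamWriter("shared.dat", true))
                  {
                      for (int n = 0; n < 1000; n++)
                          w.WriteLine("writer " + i + ", record " + n);
                  }
              });
          }
      }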

    Read the article

  • Better product to develop for, iPad or Android-based tablet, in a purely business-related application

    - by Caylem
    Hey guys, just wondering what you'd suggest as the best platform for developing a business application for a tablet device. The application needs to be multitouch, have access to a maps API, and use a database on the device. It will not go on sale in the App Store or Android Market; it is purely for a specific business task and not for the general consumer. Obviously the options seem to be iPhone OS on the iPad, or Android on an Android tablet device. The form factor for the end product requires something in the region of an 8-inch-plus screen and enough processing power to provide a good experience for the end user. Any help would be much appreciated. Thanks

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.)

    Class Design 1 -- provide all functionality in a base class, then expose applicable functionality publicly in subclasses:

      public abstract class RepositoryBase
      {
          protected virtual void SelectBase() { /* implementation... */ }
          protected virtual void InsertBase() { /* implementation... */ }
          protected virtual void UpdateBase() { /* implementation... */ }
          protected virtual void DeleteBase() { /* implementation... */ }
      }

      public class ReadOnlyRepository : RepositoryBase
      {
          public void Select() { SelectBase(); }
      }

      public class ReadWriteRepository : RepositoryBase
      {
          public void Select() { SelectBase(); }
          public void Insert() { InsertBase(); }
          public void Update() { UpdateBase(); }
          public void Delete() { DeleteBase(); }
      }

    Class Design 2 -- read-write class inherits from read-only class:

      public class ReadOnlyRepository
      {
          public void Select() { /* implementation... */ }
      }

      public class ReadWriteRepository : ReadOnlyRepository
      {
          public void Insert() { /* implementation... */ }
          public void Update() { /* implementation... */ }
          public void Delete() { /* implementation... */ }
      }

    Is one of these designs clearly stronger than the other? If so, which one and why? P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)
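
    A third option worth sketching: express the capabilities as interfaces rather than a class hierarchy, so "read-only" becomes a narrow view of the contract instead of a base class. Callers that only need reads depend on the narrow interface.

      public interface IReadOnlyRepository<T>
      {
          T Select(int id);
      }

      public interface IRepository<T> : IReadOnlyRepository<T>
      {
          void Insert(T item);
          void Update(T item);
          void Delete(int id);
      }

      // one concrete class can implement both; read-only callers
      // simply take an IReadOnlyRepository<T> parameter
      public class Repository<T> : IRepository<T>
      {
          public T Select(int id) { /* implementation... */ return default(T); }
          public void Insert(T item) { /* implementation... */ }
          public void Update(T item) { /* implementation... */ }
          public void Delete(int id) { /* implementation... */ }
      }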

    Read the article

  • A better UPDATE method in LINQ to SQL

    - by Refracted Paladin
    The code below is a typical Update method for me in L2S. I am still fairly new to a lot of this (L2S and business app development), but this just FEELS wrong, like there MUST be a smarter way of doing this. Unfortunately, I am having trouble visualizing it and am hoping someone can provide an example or point me in the right direction. To take a stab in the dark: would I have a Person object that has all these fields as properties? Then what, though? Is that redundant, since L2S already mapped my Person table to a class? Is this just 'how it goes', that you eventually end up passing 30 parameters (or MORE) to an UPDATE statement at some point? For reference, this is a business app using C#, WinForms, .NET 3.5, and L2S over SQL 2005 Standard. Here is a typical Update call for me. It lives in a file (BLLConnect.cs) with other CRUD methods; Connect is the name of the DB that holds tblPerson. When a user clicks Save(), this is what is eventually called, with all of these fields having potentially been updated:

      public static void UpdatePerson(int personID, string userID, string titleID, string firstName,
          string middleName, string lastName, string suffixID, string ssn, char gender,
          DateTime? birthDate, DateTime? deathDate, string driversLicenseNumber,
          string driversLicenseStateID, string primaryRaceID, string secondaryRaceID,
          bool hispanicOrigin, bool citizenFlag, bool veteranFlag, short? residencyCountyID,
          short? responsibilityCountyID, string emailAddress, string maritalStatusID)
      {
          using (var context = ConnectDataContext.Create())
          {
              var personToUpdate = (from person in context.tblPersons
                                    where person.PersonID == personID
                                    select person).Single();

              personToUpdate.TitleID = titleID;
              personToUpdate.FirstName = firstName;
              personToUpdate.MiddleName = middleName;
              personToUpdate.LastName = lastName;
              personToUpdate.SuffixID = suffixID;
              personToUpdate.SSN = ssn;
              personToUpdate.Gender = gender;
              personToUpdate.BirthDate = birthDate;
              personToUpdate.DeathDate = deathDate;
              personToUpdate.DriversLicenseNumber = driversLicenseNumber;
              personToUpdate.DriversLicenseStateID = driversLicenseStateID;
              personToUpdate.PrimaryRaceID = primaryRaceID;
              personToUpdate.SecondaryRaceID = secondaryRaceID;
              personToUpdate.HispanicOriginFlag = hispanicOrigin;
              personToUpdate.CitizenFlag = citizenFlag;
              personToUpdate.VeteranFlag = veteranFlag;
              personToUpdate.ResidencyCountyID = residencyCountyID;
              personToUpdate.ResponsibilityCountyID = responsibilityCountyID;
              personToUpdate.EmailAddress = emailAddress;
              personToUpdate.MaritalStatusID = maritalStatusID;
              personToUpdate.UpdateUserID = userID;
              personToUpdate.UpdateDateTime = DateTime.Now;

              context.SubmitChanges();
          }
      }
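
    A sketch of one common refactoring: since L2S already generated the tblPerson entity, let the caller fill in an instance and pass that single object down, then copy its fields onto the tracked entity. Names below come from the question; the CopyTo helper is hypothetical and would hold the field-by-field assignments in one place.

      public static void UpdatePerson(tblPerson updated, string userID)
      {
          using (var context = ConnectDataContext.Create())
          {
              var personToUpdate = context.tblPersons
                  .Single(p => p.PersonID == updated.PersonID);

              CopyTo(updated, personToUpdate);      // hypothetical: assigns the scalar fields across
              personToUpdate.UpdateUserID = userID;
              personToUpdate.UpdateDateTime = DateTime.Now;

              context.SubmitChanges();
          }
      }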

    Read the article

  • How to change this C++ code to make input work better

    - by Phenom
    cout << "Input street number: "; cin >> streetnum; cout << "Input street name: "; cin >> streetname; cout << "Input resource name: "; cin >> rName; cout << "Input architectural style: "; cin >> aStyle; cout << "Input year built: "; cin >> year; The problem with the above code happens if you enter in spaces between words. For example if I enter "Ampitheater Parkway" for streetname, then it puts "Ampitheater" in streetname, skips the prompt for resource name and enters "Parkway" into the next field. How can I fix this?

    Read the article

  • Embeddable database better than SQLite for java

    - by dexter
    I am creating a web application that accesses a SQLite database on the server. I also have "clients" that update this same database. As we know, SQLite locks the entire database during INSERTs, which the clients perform while the web application is trying to make some UPDATEs at the same time. So my problem is about concurrency in database access. I would like to use an embeddable database like SQLite. Any suggestions?

    Read the article

  • Better Alternative to Telerik Draggable Panel?

    - by user284523
    When putting a video in a Telerik draggable panel, dragging the panel on Firefox makes the video restart from the beginning, because the DOM is reconstructed. They don't seem to have an answer to this. Also, we can't seem to control the z-index, as it isn't taken into account: when moving the panel over other Telerik controls, the video slips underneath them. So, is there any other draggable panel that wouldn't have these annoyances? Telerik doesn't seem to give any answer, so we're afraid we're stuck, and we cannot afford to wait longer. We are currently thinking about using Yahoo UI.

    Read the article

  • Integer array or struct array - which is better?

    - by MusiGenesis
    In my app, I'm storing bitmap data in a two-dimensional integer array (int[,]). To access the R, G, and B values I use something like this:

      // read:
      int i = _data[x, y];
      byte B = (byte)(i >> 0);
      byte G = (byte)(i >> 8);
      byte R = (byte)(i >> 16);

      // write:
      _data[x, y] = BitConverter.ToInt32(new byte[] { B, G, R, 0 }, 0);

    I'm using integer arrays instead of an actual System.Drawing.Bitmap because my app runs on Windows Mobile devices, where the memory available for creating bitmaps is severely limited. I'm wondering, though, if it would make more sense to declare a structure like this:

      public struct RGB
      {
          public byte R;
          public byte G;
          public byte B;
      }

    ...and then use an array of RGB instead of an array of int. This way I could easily read and write the separate R, G, and B values without having to do bit-shifting and BitConverter-ing. I vaguely remember something from days of yore about byte variables being block-aligned on 32-bit systems, so that a byte actually takes up 4 bytes of memory instead of just 1 (but maybe that was just a Visual Basic thing). Would using an array of structs (like the RGB example above) be faster than using an array of ints, and would it use 3/4 the memory, or 3 times the memory, of the ints?
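
    For what it's worth, a sketch of two related points: the write can be done with shifts (no temporary byte[] allocation per store), and a struct of three bytes can have its layout pinned down explicitly. In .NET, a 3-byte struct typically occupies 3 bytes per array element, so RGB[] should use roughly 3/4 the memory of int[], though this is worth verifying on the Compact Framework:

      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential, Pack = 1)]
      public struct RGB
      {
          public byte R;
          public byte G;
          public byte B;
      }

      // allocation-free write into the int[,] version:
      // _data[x, y] = (R << 16) | (G << 8) | B;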

    Read the article
