Search Results

Search found 859 results on 35 pages for 'versus'.

Page 9 of 35

  • Bitwise operators versus .NET abstractions for bit manipulation, from a C# perspective

    - by Leron
    I'm trying to pick up basic skills in working with bits in C#.NET. Yesterday I posted an example with a simple problem that needed bit manipulation, which led me to the fact that there are two main approaches: using bitwise operators, or using .NET abstractions such as BitArray. (Please let me know if there are more built-in tools for working with bits in .NET besides BitArray, and how to find more information about them.) I understand that bitwise operators work faster, but BitArray is much easier for me to use; one thing I really try to avoid, though, is learning bad practices. Even though my personal preference is for the .NET abstractions, I want to know which is actually better to learn and use in a real program. Thinking about it, I'm tempted to believe the .NET abstractions are not that bad after all; there must be a reason for them to be there, and maybe, as a beginner, it's more natural to learn the abstraction first and improve my low-level skills later. But these are just random thoughts.
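
    For reference, .NET also ships BitVector32 (in System.Collections.Specialized) as another built-in bit container. Below is a minimal sketch, not from the original question, showing the same single-bit operation done both ways; the values are illustrative.

        using System;
        using System.Collections;

        class BitDemo
        {
            static void Main()
            {
                int value = 4; // binary 100

                // Bitwise-operator approach: set bit 1 directly.
                int manual = value | (1 << 1);
                Console.WriteLine(manual); // 6 (binary 110)

                // BitArray abstraction: same operation with readable indexing.
                BitArray bits = new BitArray(new int[] { value });
                bits[1] = true;
                int[] back = new int[1];
                bits.CopyTo(back, 0);
                Console.WriteLine(back[0]); // 6
            }
        }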

    Read the article

  • Behavior of <- NULL on lists versus data.frames for removing data

    - by Ananda Mahto
    Many R users eventually figure out lots of ways to remove elements from their data. One way is to use NULL, particularly when you want to do something like drop a column from a data.frame or drop an element from a list. Eventually, a user comes across a situation where they want to drop several columns from a data.frame at once, and they hit upon <- list(NULL) as the solution (since using <- NULL will result in an error). A data.frame is a special type of list, so it wouldn't be too tough to imagine that the approaches for removing items from a list should be the same as removing columns from a data.frame. However, they produce different results, as can be seen in the example below.

        ## Make some small data--two data.frames and two lists
        cars1 <- cars2 <- head(mtcars)[1:4]
        cars3 <- cars4 <- as.list(cars2)

        ## Demonstration that the `list(NULL)` approach works
        cars1[c("mpg", "cyl")] <- list(NULL)
        cars1
        #                   disp  hp
        # Mazda RX4          160 110
        # Mazda RX4 Wag      160 110
        # Datsun 710         108  93
        # Hornet 4 Drive     258 110
        # Hornet Sportabout  360 175
        # Valiant            225 105

        ## Demonstration that simply using `NULL` does not work
        cars2[c("mpg", "cyl")] <- NULL
        # Error in `[<-.data.frame`(`*tmp*`, c("mpg", "cyl"), value = NULL) :
        #   replacement has 0 items, need 12

    Switch to applying the same concept to a list, and compare the difference in behavior.

        ## Does not fully drop the items, but sets them to `NULL`
        cars3[c("mpg", "cyl")] <- list(NULL)
        # $mpg
        # NULL
        #
        # $cyl
        # NULL
        #
        # $disp
        # [1] 160 160 108 258 360 225
        #
        # $hp
        # [1] 110 110 93 110 175 105

        ## *Does* drop the `list` items while this would
        ## have produced an error with a `data.frame`
        cars4[c("mpg", "cyl")] <- NULL
        # $disp
        # [1] 160 160 108 258 360 225
        #
        # $hp
        # [1] 110 110 93 110 175 105

    The main questions I have are: if a data.frame is a list, why does it behave so differently in this scenario? Is there a foolproof way of knowing when an element will be dropped, when it will produce an error, and when it will simply be given a NULL value? Or do we depend on trial-and-error for this?

    Read the article

  • Short names versus long names in Windows

    - by normski
    I have some code which gets the short name from a file path, using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following conversion to short and then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS 2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if (pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if (newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if (pathLengthNeeded != 0)
                {
                    // Convert it back to a long path if possible.
                    // For goodness sake, can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if (newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }

    Read the article

  • If-else-if versus map

    - by perezvon
    Hi, suppose I have an if/else-if chain like this:

        if (x.GetId() == 1)
        {
        }
        else if (x.GetId() == 2)
        {
        }
        // ... 50 more else-if statements

    What I wonder is: if I keep a map instead, will it be any better in terms of performance? (Assuming the keys are integers.)
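
    A minimal sketch of the map-based alternative, assuming the branches can be expressed as callables; the handler names are illustrative, not from the question:

        #include <functional>
        #include <iostream>
        #include <map>

        void HandleOne() { std::cout << "id 1\n"; }
        void HandleTwo() { std::cout << "id 2\n"; }

        int main()
        {
            // One O(log n) lookup (O(1) average with std::unordered_map)
            // replaces a linear chain of up to 50 comparisons.
            std::map<int, std::function<void()>> handlers = {
                { 1, HandleOne },
                { 2, HandleTwo },
                // ... 50 more entries
            };

            int id = 2; // stand-in for x.GetId()
            auto it = handlers.find(id);
            if (it != handlers.end())
                it->second(); // dispatch
            else
                std::cout << "unknown id\n";
        }

    In practice, compilers often turn a dense integer if/else-if or switch chain into a jump table anyway, so measuring both variants is worthwhile.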

    Read the article

  • C++ performance, for versus while

    - by aaa
    Hello. In general (or from your experience), is there a difference in performance between for and while loops? What if they are doubly or triply nested? Is vectorization (SSE) affected by the choice of loop construct in g++ or the Intel compilers? Thank you.
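
    For illustration, here are two semantically identical loops; an optimizing compiler typically normalizes both to the same internal form, so vectorization tends to depend on the loop body and its data dependencies, not on the keyword used:

        // Both functions express the same iteration; with optimization enabled,
        // g++ and icc typically emit the same (vectorized) code for each.
        void scale_for(float* a, int n)
        {
            for (int i = 0; i < n; ++i)
                a[i] *= 2.0f;
        }

        void scale_while(float* a, int n)
        {
            int i = 0;
            while (i < n) {
                a[i] *= 2.0f;
                ++i;
            }
        }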

    Read the article

  • .NET version with 64-bit versus 32-bit assemblies

    - by user54064
    What bitness (64-bit vs. 32-bit) will an app load with if some of the assemblies it references are compiled with the x86 (32-bit only) setting instead of AnyCPU? Will the app still run as 64-bit, or will it be forced to run as 32-bit if at least one of the referenced assemblies is compiled as 32-bit only? The app is running .NET 3.5.
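
    A minimal sketch of how to check what actually happened at runtime; note that a 64-bit process does not silently fall back to 32-bit when it meets an x86-only reference: the load fails with a BadImageFormatException instead.

        using System;

        class BitnessCheck
        {
            static void Main()
            {
                // .NET 3.5 has no Environment.Is64BitProcess, so IntPtr.Size is
                // the usual probe: 8 in a 64-bit process, 4 in a 32-bit one.
                Console.WriteLine(IntPtr.Size == 8 ? "64-bit process" : "32-bit process");
            }
        }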

    Read the article

  • Multithreading, when to yield versus sleep

    - by aaa
    Hello. To clarify terminology: yield is when a thread gives up its time slice. My platform of interest is POSIX threads, but I think the question is general. Suppose I have a consumer/producer pattern. If I want to throttle either the consumer or the producer, which is better to use, sleep or yield? I am mostly interested in the efficiency of using either function. Thanks.
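
    A minimal sketch of both calls, assuming a POSIX system; the throttling functions are illustrative:

        #include <sched.h>   /* sched_yield */
        #include <time.h>    /* nanosleep */

        void throttle_with_yield(void)
        {
            /* Gives up the rest of the time slice, but only if another runnable
               thread exists; otherwise the caller resumes immediately, so a
               yield-based throttle can still spin at 100% CPU. */
            sched_yield();
        }

        void throttle_with_sleep(void)
        {
            /* Blocks for roughly 1 ms regardless of other threads, actually
               freeing the CPU; usually the better choice for rate-limiting. */
            struct timespec ts = { 0, 1000000L }; /* 1 ms */
            nanosleep(&ts, NULL);
        }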

    Read the article

  • Memcachedb Versus MongoDB Versus CouchDB in terms of file based caching solution?

    - by Scott Faisal
    We need a caching solution that essentially caches data (text files) for anywhere from 3 days up to a week, based on user preferences and criteria. In this case memory-based caching does not make sense to us. We were referred to MemcacheDB, but I also thought of some NoSQL solutions. Our current application uses an RDBMS (MySQL), and I guess it makes sense to use MemcacheDB; however, NoSQL does appeal, as it is something more on the horizon. We have not deployed a production-level application under NoSQL, though, and the beta stuff does not sit well with management/investors. Anyhow, what are your thoughts, and how would you address it? Thank you.

    Read the article

  • Routing problem, handled differently online versus locally - ASP.NET MVC 1.0

    - by VinnyG
    I have these lines in my RegisterRoutes:

        routes.MapRoute("Pages3", "{url1}/{url2}/{url3}", MVC.Page.RedirectTo(), new { url1 = "", url2 = "", url3 = "" });
        routes.MapRoute("Pages2", "{url1}/{url2}", MVC.Page.RedirectTo(), new { url1 = "", url2 = "", url3 = "" });
        routes.MapRoute("Pages1", "{url1}", MVC.Page.RedirectTo(), new { url1 = "", url2 = "", url3 = "" });

    On my local machine and on my demo server (demo.myserver.com/myproject/) this works great for handling the 404, but live (www.mysite.com) it just goes to the IIS 404. I have a PageController which checks whether the page exists in the DB, and if it doesn't, I return a 404 view with the status code 404 (Response.StatusCode = 404;). How can I reproduce the same behavior live? Do I need to set up something in IIS? I'm on Windows Server 2008, using C# and MVC 1.0. Thanks for the help!
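
    One commonly cited fix, assuming IIS 7's custom-error handling is replacing the application's response on the live server, is to ask IIS to pass the app's own 404 through. A minimal sketch; the controller shape and view name are illustrative:

        using System.Web.Mvc;

        public class PageController : Controller
        {
            public ActionResult RedirectTo(string url1, string url2, string url3)
            {
                // ... page lookup elided ...
                Response.StatusCode = 404;
                // Ask IIS 7's integrated pipeline not to swap in its own error
                // page; the web.config equivalent is
                // <httpErrors existingResponse="PassThrough" />.
                Response.TrySkipIisCustomErrors = true;
                return View("NotFound"); // view name is illustrative
            }
        }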

    Read the article

  • SDL versus GLFW?

    - by user697111
    What are the pros and cons of each? They seem to serve the same purpose. I have a few demos with each, and they seem about the same. Performance- or cross-platform-wise, is one better than the other? The only thing I notice is that SDL seems to have more "helper" libraries (fonts, images, mixer, built-in sound support, etc.). On its site, GLFW claims to be more "OpenGL"-focused, but you still have to use GLEW to get at newer OpenGL features (same with SDL). I guess I'm leaning towards using SDL for now (more mature, more features, bigger community). Are there any reasons I've missed why GLFW stands out and I should use it instead of SDL?

    Read the article

  • BlackBerry versus iPhone development

    - by Blankman
    For those of you who know or have experienced both BlackBerry and iPhone development: which platform did you prefer, and why? I'm looking for things like debugging ability, API stability, UI development, deployment, IDE, documentation, etc.

    Read the article

  • Hibernate and Child Objects (add versus clear)

    - by tyndall
    Let's say I have a domain model with Automobile and Wheel. Automobile has a wheels property, a List of Wheel objects, and Wheel has a many-to-one relationship back to Automobile. Suppose I get an object back from Hibernate and it has 4 wheels. I take that object, remove the 4 wheels, add 4 new ones, and then save. If I ask Hibernate for the object again, it returns an auto with 8 wheels... what are we doing wrong? I don't have access to the source for a few days but want to give our Java devs a push in the right direction. Thanks.
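
    A minimal sketch of the usual fix, assuming a bidirectional mapping with cascading and orphan removal enabled; all names and stubs here are illustrative, not from the original code:

        import java.util.ArrayList;
        import java.util.List;

        class Automobile {
            private final List<Wheel> wheels = new ArrayList<Wheel>();
            List<Wheel> getWheels() { return wheels; }
        }

        class Wheel {
            private Automobile automobile;
            void setAutomobile(Automobile a) { automobile = a; }
        }

        class WheelSwap {
            // Mutate the collection Hibernate is tracking rather than assigning
            // a brand-new list; replacing the reference can leave the old child
            // rows undeleted, which matches the 4-wheels-becoming-8 symptom.
            static void replaceWheels(Automobile car, List<Wheel> newWheels) {
                car.getWheels().clear();            // old wheels become orphans
                for (Wheel w : newWheels) {
                    w.setAutomobile(car);           // keep the many-to-one side in sync
                    car.getWheels().add(w);
                }
                // On session flush, orphan removal deletes the old rows and
                // inserts the new ones.
            }
        }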

    Read the article

  • OpenGL - GL_FRONT versus GL_FRONT_AND_BACK

    - by Drew Noakes
    I'm tinkering with an open source project that uses OpenGL for rendering in 3D. In the construction of the materials I see code like this:

        // set ambient material reflectance
        glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, mAmbient);

    In other examples, this is used:

        glMaterialfv(GL_FRONT, GL_AMBIENT, mAmbient);

    So my question is, what is the difference here? Under what circumstances would it look different, and, if my volume is enclosed with all normals pointing outwards, is there any performance difference?
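
    For illustration, back-face material state only matters when back faces are actually lit or drawn; a minimal fixed-function sketch (the variable names are assumptions):

        /* With two-sided lighting on, front and back materials can differ;
           with a closed volume and back-face culling, the back material is
           never visible, so GL_FRONT and GL_FRONT_AND_BACK look the same. */
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
        glMaterialfv(GL_FRONT, GL_AMBIENT, frontAmbient);
        glMaterialfv(GL_BACK,  GL_AMBIENT, backAmbient);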

    Read the article

  • Read/Write versus Create/Read/Update/Delete permissions difference

    - by archmeta
    From a practical standpoint, is there any real-world difference between Read/Write permissions and Create/Read/Update/Delete permissions? It would seem that if a user has the ability to create, he should always have the ability to update or delete. If this is correct, then Read/Write should always be sufficient, and there is no need to store separate Create/Read/Update/Delete permissions. Are there any real-world use cases in which a user should be given permission to create but not update, or update but not delete, and so on?
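
    One real-world case where the split matters is an append-only audit log: users may add and read entries but must never change or remove them. A minimal sketch of modeling that with separate flags (the type names are illustrative):

        using System;

        [Flags]
        enum CrudPermission
        {
            None   = 0,
            Create = 1,
            Read   = 2,
            Update = 4,
            Delete = 8
        }

        class Demo
        {
            static void Main()
            {
                // Append-only: create and read, but no update or delete.
                CrudPermission auditLog = CrudPermission.Create | CrudPermission.Read;
                Console.WriteLine((auditLog & CrudPermission.Update) != 0); // False
            }
        }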

    Read the article

  • append versus resize for numpy array

    - by Abruzzo Forte e Gentile
    Hi all. I would like to append a value at the end of my numpy array. I saw the numpy.append function, but it makes a full copy of the original array, adding my new value at the end. I would like to avoid copies since my arrays are big. I am using the resize method and then setting the last index to the new value. Can you confirm that resize is the best way to append a value at the end? Is it not moving memory around somehow? Thanks, AFG

        oldSize = myArray.shape[0]
        myArray.resize(oldSize + 1)
        myArray[oldSize] = newValue
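
    Worth noting: ndarray.resize can itself reallocate (and therefore copy) when the block cannot be grown in place, so appending one element at a time is still O(n) per append in the worst case. A minimal sketch of the two usual alternatives (names are illustrative):

        import numpy as np

        # If the final size is unknown, collect in a list and convert once.
        values = []
        for x in range(1000):
            values.append(x * 0.5)
        arr = np.asarray(values)

        # If an upper bound is known, preallocate and track the fill level.
        buf = np.empty(1000)
        n = 0
        for x in range(1000):
            buf[n] = x * 0.5
            n += 1
        arr = buf[:n]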

    Read the article

  • Computer science versus software engineering - which?

    - by Will M
    Something I think Jeff and Joel touched on in an early Stack Overflow podcast, though I don't remember whether they reached a conclusion: which curriculum is better preparation for a career as a developer and software entrepreneur, computer science in a liberal arts college, or software engineering in an engineering school? Or, put another way, which credential should I look for in someone being added to my team, or hired for my company (if I had one...)? Edit note: the initial post mistakenly asked to compare computer science with computer engineering, rather than software engineering, and some answers relate to that question.

    Read the article

  • The "let ... and" construct versus a "let ... in" sequence

    - by Stringer
    Consider this OCaml code:

        let coupe_inter i j cases =
          let lcases = Array.length cases in
          let low,_,_ = cases.(i)
          and _,high,_ = cases.(j) in
          low, high,
          Array.sub cases i (j - i + 1),
          case_append (Array.sub cases 0 i)
                      (Array.sub cases (j + 1) (lcases - (j + 1)))

    Why is the let ... and ... in expression used in place of a let ... in let ... in sequence (as F# forces you to do)? This construct seems quite frequent in OCaml code. Thanks!
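
    For illustration, the difference is scoping: and binds in parallel, so the bindings cannot see each other, while chained let ... in binds sequentially. A minimal sketch (the function names are made up):

        (* Sequential: b sees a. *)
        let sequential () =
          let a = 1 in
          let b = a + 1 in
          a + b

        (* Parallel: a and b are independent; writing b = a + 1 here would be
           a compile error, because a is not yet in scope. *)
        let parallel () =
          let a = 1
          and b = 2 in
          a + b

        let () = Printf.printf "%d %d\n" (sequential ()) (parallel ())

    In the question's code, and is purely stylistic: the two destructuring binds are independent of each other, and writing and signals that to the reader.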

    Read the article
