Search Results

Search found 8638 results on 346 pages for 'f vs c'.


  • MacOSX: OSAtomic vs OSAtomicBarrier

    - by anon
    For the functions in #include <libkern/OSAtomic.h> there are OSAtomic and OSAtomicBarrier versions. However, the documentation does not show sample code for when it is safe to use just the OSAtomic version without the Barrier, or when OSAtomic would be unsafe but OSAtomicBarrier would be safe. Can anyone provide explanations plus sample code? [Random ramblings of "your opinion" without actual code are useless. Readers: please downvote such answers, and vigorously upvote answers with actual code.] [C/C++ code preferred; assembly okay too.]

    Read the article

  • <asp:Table> Vs html <table>

    - by keith
    What are the pros and cons of using the ASP.NET asp:Table control compared to the old reliable HTML <table>? I know that the asp:Table will end up on the returned page as an HTML table, and from looking into it so far people say it's easier to work with the asp:Table in server-side code, but I'd love to hear what the Stack Overflow community has to say about the matter.
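
    To make the comparison concrete: with the asp:Table control the rows and cells are objects you manipulate from code-behind, whereas a literal <table> has to be assembled as markup or data-bound. A rough C# sketch (the TableSketch class, FillTable helper, and city list are illustrative, not from the question):

        using System.Web.UI.WebControls;   // classic ASP.NET Web Forms

        static class TableSketch
        {
            // Fills an <asp:Table ID="ResultsTable" runat="server"> from code-behind.
            // With a literal HTML <table> you would build markup strings or bind a
            // Repeater instead of working with an object model like this.
            public static void FillTable(Table resultsTable, string[] cities)
            {
                foreach (string city in cities)
                {
                    var row = new TableRow();
                    row.Cells.Add(new TableCell { Text = city });
                    resultsTable.Rows.Add(row);
                }
            }
        }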

    Read the article

  • Learn Actionscript 3.0+Flash Vs. C#

    - by user335932
    I have a background in Python and I'm looking for a new language. I am almost exclusively interested in making games, and I have narrowed it down to two languages: C# and ActionScript. C#, because Microsoft allows you to make indie XBLA games programmed in C# only; ActionScript, so I can make Flash games for Newgrounds and the like. Which do you think is better to learn in the long run?

    Read the article

  • URIs vs Hidden Forms

    - by NateDogg
    I'm working in the CodeIgniter framework, and want to send requests to my controller/model that involve several variables. Is there a difference between passing those variables via a hidden form (i.e. using POST) and passing them through URI segments (e.g. travel/$month/$day/)? What about security concerns? For example:
        URI: http://www.example.com/travel/$month/$day/
        Hidden form: form_hidden('month', $month); form_hidden('day', $day);

    Read the article

  • Help with installing ClamAV for a remote Apache server vs a WAMP local server

    - by Scarface
    Hey guys, I have never installed an extension before so I am not really sure where to start. I want to do two things: (1) install the ClamAV virus scanner http://sourceforge.net/projects/php-clamav/ on my remote server, which is a Linux/Apache setup using cPanel as a GUI; and (2) install ClamAV on the local server on my home computer, which runs Windows and has WAMP installed for server-side work. What is the difference between the two installation methods? What is required for each installation? If anyone can point me in the right direction in any way, I would really appreciate it, even if it is just an article or tutorial on how to get started.

    Read the article

  • Android -- Object Creation/Memory Allocation vs. Performance

    - by borg17of20
    Hello all, this is probably an easy one. I have about 20 TextViews/ImageViews in my current project that I access like this:

        ((TextView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Text)).setText("");
        // or
        ((ImageView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Right)).setVisibility(View.INVISIBLE);

    My question is this: am I better off, from a performance standpoint, just assigning these views to variables once? Further, am I losing some performance to the constant "search" process that goes on as part of the findViewById(...) method? (i.e. does findViewById(...) use some sort of hashtable/hashmap for look-ups, or does it implement an iterative search over the view hierarchy?) At present, my program never uses more than 2.5MB of RAM, so will assigning 20 or so more object variables drastically affect this? I don't think so, but I figured I'd ask. Thanks.

    Read the article

  • Debug vs Trace in C#

    - by koumides
    All, as I understand it, statements like Debug.WriteLine() will not stay in the code in the Release build, while Trace.WriteLine() will. What controls this behaviour? Does the C# compiler ignore everything from the System.Diagnostics.Debug class when DEBUG is not defined? I am just trying to understand the internals of C# and am curious. Thanks, MK
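
    For what it's worth, the mechanism behind this is ConditionalAttribute: the methods of System.Diagnostics.Debug are marked [Conditional("DEBUG")] and those of Trace are marked [Conditional("TRACE")], so the compiler omits the call sites when the corresponding symbol is undefined. A minimal sketch of the same mechanism applied to a user-defined method (names are illustrative):

        using System;
        using System.Diagnostics;

        class ConditionalDemo
        {
            // Calls to this method are removed by the C# compiler whenever the
            // DEBUG symbol is not defined (e.g. a typical Release build), which
            // is the same mechanism Debug.WriteLine relies on.
            [Conditional("DEBUG")]
            static void DebugOnlyLog(string message)
            {
                Console.WriteLine(message);
            }

            static void Main()
            {
                DebugOnlyLog("only visible when DEBUG is defined");
                Console.WriteLine("always visible");
            }
        }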

    Read the article

  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents the user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for anonymous visitors, and a corresponding new record is created in the User table in the database. This approach lets me save some state for the current user regardless of their type. The problem with this approach is the Google bot, and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state without too much overhead from cookieless bots?
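
    One commonly suggested trade-off for this situation is to defer the database insert until the visitor has proven it keeps cookies (i.e. it sends one back on a later request) and to skip known crawlers by User-Agent. A rough C# sketch; the VisitorFilter class, method name, and marker list are illustrative assumptions, not part of the asker's code:

        using System;
        using System.Linq;

        static class VisitorFilter
        {
            // A few common crawler markers; real lists are longer and change over time.
            static readonly string[] BotMarkers = { "googlebot", "bingbot", "slurp", "crawler", "spider" };

            // Returns true when it looks reasonable to create a persistent anonymous User record.
            public static bool ShouldCreateAnonymousUser(string userAgent, bool clientReturnedCookie)
            {
                if (string.IsNullOrEmpty(userAgent))
                    return false;

                string ua = userAgent.ToLowerInvariant();
                if (BotMarkers.Any(marker => ua.Contains(marker)))
                    return false;

                // Deferring the insert until the client has echoed a cookie back
                // filters out most cookieless bots.
                return clientReturnedCookie;
            }
        }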

    Read the article

  • Confused about "override" vs. "new" in C#

    - by iTayb
    I have the following classes:

        class Base
        {
            public virtual void Print() { Console.WriteLine("Base"); }
        }

        class Der1 : Base
        {
            public new virtual void Print() { Console.WriteLine("Der1"); }
        }

        class Der2 : Der1
        {
            public override void Print() { Console.WriteLine("Der2"); }
        }

    This is my Main method:

        Base b = new Der2();
        Der1 d1 = new Der2();
        Der2 d2 = new Der2();
        b.Print();
        d1.Print();
        d2.Print();

    The output is Base, Der2, Der2. As far as I know, override won't let the previous method run, even if the reference points to it, so the first line should output Der2 as well. However, Base came out. How is that possible? Why didn't the override work there?
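
    For contrast, here is the same hierarchy rewritten with override all the way down (a sketch for comparison, not the asker's code); with ordinary overriding, every call dispatches to the most-derived implementation:

        using System;

        class Base { public virtual void Print() { Console.WriteLine("Base"); } }
        class Der1 : Base { public override void Print() { Console.WriteLine("Der1"); } }
        class Der2 : Der1 { public override void Print() { Console.WriteLine("Der2"); } }

        class OverrideDemo
        {
            static void Main()
            {
                Base b = new Der2();
                Der1 d1 = new Der2();
                Der2 d2 = new Der2();
                b.Print();   // Der2
                d1.Print();  // Der2
                d2.Print();  // Der2
            }
        }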

    Read the article

  • Use of properties vs backing-field inside owner class

    - by whatispunk
    I love auto-implemented properties in C#, but lately there's been this elephant standing in my cubicle and I don't know what to do with him. If I use auto-implemented properties (hereafter "AIPs"), then I no longer have a private backing field to use internally. This is fine because the AIP has no side effects. But what if later on I need to add some extra processing in the get or set? Now I need to create a backing field so I can expand my getters and setters. This is fine for external code using the class, because it won't notice the difference. But now all of the internal references to the property are going to invoke these side effects, so all internal access to the formerly auto-implemented property must be refactored to use the backing field. So my question is: what do most of you do? Do you use auto-implemented properties, or do you prefer to always use a backing field? What do you think about properties with side effects?
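
    A small sketch of the situation being described: an auto-implemented property that later grows a backing field and a setter side effect. The Order class and TotalChanged event are made up for illustration only:

        using System;

        class Order
        {
            // Before: auto-implemented property, no backing field visible to the class.
            // public decimal Total { get; set; }

            // After: a backing field plus a setter side effect.
            private decimal total;

            public decimal Total
            {
                get { return total; }
                set
                {
                    total = value;
                    TotalChanged?.Invoke(this, EventArgs.Empty); // the new side effect
                }
            }

            public event EventHandler TotalChanged;

            public void ApplyDiscount(decimal amount)
            {
                // Internal code must now choose: write to the field (no side effect)
                // or to the property (raises TotalChanged). That is the dilemma in the question.
                total -= amount;          // bypasses the side effect
                // Total -= amount;       // would raise TotalChanged
            }
        }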

    Read the article

  • typeof === "undefined" vs. != null

    - by Thor Thurn
    I often see JavaScript code which checks for undefined parameters etc. this way: if (typeof input !== "undefined") { // do stuff } This seems kind of wasteful, since it involves both a type lookup and a string comparison, not to mention its verbosity. It's needed because 'undefined' could be renamed, though. My question is: How is that code any better than this approach: if (input != null) { // do stuff } As far as I know, you can't redefine null, so it's not going to break unexpectedly. And, because of the type-coercion of the != operator, this checks for both undefined and null... which is often exactly what you want (e.g. for optional function parameters). Yet this form does not seem widespread, and it even causes JSLint to yell at you for using the evil != operator. Why is this considered bad style?

    Read the article

  • Warning vs. error

    - by Samuel
    I had an annoying issue: a "Possible loss of precision" error when compiling my Java program in BlueJ (but from what I read this isn't tied to a specific IDE). I was surprised that the compiler told me there is a possible loss of precision and wouldn't let me compile/run the program. Why is this an error and not a warning saying "you might lose precision here; if you don't want that, change your code"? The program runs just fine when I drop the float values, and it wouldn't matter anyway since there is no such point (e.g. [143.08, 475.015]) on my screen. On the other hand, when I loop through an ArrayList and inside the loop an if clause removes elements from that ArrayList, it runs fine: it just throws an error and fails to display the ArrayList [used for drawing circles] for a fraction of a second. That appears to me to be a severe error, yet it causes hardly any trouble, while I wouldn't want to have such a thing in my code at all. What's the boundary?

    Read the article

  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible: I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call. I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill. To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view), create three child views, and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options to notify the parent:
    1. Define a protocol. The parent implements the protocol and sets itself as the delegate to each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case the parent is notified that a view was touched, but it doesn't know which one. Not good enough.
    2. Same as #1, but define the protocol methods to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way the parent knows exactly which view was touched. It just seems really strange for an object to pass a reference to itself.
    3. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification with each of the three children as the notification sender. The children don't need to attach themselves to the userInfo dictionary, but they do need to send the notification with a pointer to themselves as the sender.
    4. Same as #3, but rather than using separate methods, the parent could use just one, with a switch (or other branching) on the notification's sender to determine which path to take.
    5. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent, either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable.
    Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.

    Read the article

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons; that way I can change colors easily on my icons without having to resort to an image editor. But my question is this: say I have a TreeView control with an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? Because I'd rather use vectors, but I'm wondering about the performance implications.

    Read the article

  • Android dev time vs iPhone dev time

    - by Daniel Benedykt
    Hi, if someone has to develop the same application for Android and iPhone, is it more difficult to develop on one platform than on the other? Does it take more time? Let's consider the average app: lists, text, buttons, fetching information from the internet. Thanks

    Read the article

  • iPhone float vs integer rounding?

    - by Rob
    Okay, from what I understand, an integer result that would be a fraction gets rounded one way or the other, so that if a formula comes up with, say, 5/6, it will automatically be rounded to 1. I have this calculation:

        xyz = ((1300 - [abc intValue])/6) + 100;

    xyz is declared as an NSInteger, and abc is an NSString chosen via a UIPicker. I want the calculation (1300 - [abc intValue]) to add 1 to 100 for every 6 units below 1300. For example, 1255 should result in xyz having a value of 100 and 1254 should result in a value of 101. Now, I understand that my formula above is wrong because of the rounding principles, but I am getting some crazy results from the program itself. When I punched in 1259, I got 106. When I punched in 1255, I got 107. Why would it behave that way?

    Read the article

  • VS 2008 and F# Feb CTP - Can't debug

    - by Steve
    I have downloaded the VS2008 integrated shell and the F# Feb CTP, and I have the F# environment working just fine. The problem comes when I try to debug. Nothing happens at all. The output window says

        ------ Build started: Project: ConsoleApplication1, Configuration: Debug Any CPU ------
        ========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========

    and none of my breakpoints are hit. My "program" is as simple as can be:

        #light
        open System
        printfn "Hello World"
        Console.ReadKey(true)

    with breakpoints on the printfn and Console lines. The things I've read seem to suggest that debugging would work with this setup, and there is a debugger folder under common7/packages with files in it. Thanks for any help. EDIT: I'm on Win7 64 bit

    Read the article

  • Ruby proc vs lambda in initialize()

    - by Jimmy Chu
    I found out this morning that Proc.new works in a class initialize method, but not lambda. Concretely, I mean:

        class TestClass
          attr_reader :proc, :lambda

          def initialize
            @proc = Proc.new { puts "Hello from Proc" }
            @lambda = lambda { puts "Hello from lambda" }
          end
        end

        c = TestClass.new
        c.proc.call
        c.lambda.call

    In the above case, the result will be:

        Hello from Proc
        test.rb:14:in `<main>': undefined method `call' for nil:NilClass (NoMethodError)

    Why is that? Thanks!

    Read the article
