Search Results

Search found 9026 results on 362 pages for 'vs extensibility'.


  • MonoTouch - foreach vs for loops (performance)

    - by ifwdev
    Normally I'm well aware that a consideration like this is premature optimization. Right now I have some event handlers being attached inside a foreach loop. I am wondering if this style might be prone to leaks or inefficient memory use due to closures being created. Is there any validity to this thinking?
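
    To make the pattern concrete, here is a minimal C# sketch of what is being described - handlers attached with a lambda inside a foreach loop. It assumes MonoTouch's UIButton and its TouchUpInside event; the method and variable names are purely illustrative:

        using System;
        using MonoTouch.UIKit;

        class HandlerWiring
        {
            // Each lambda captures a local, so a closure object is allocated per
            // iteration and stays reachable for as long as the handler remains
            // subscribed to the button's event.
            public static void Wire(UIButton[] buttons)
            {
                foreach (UIButton button in buttons)
                {
                    // Copy to a per-iteration local before capturing; older C#
                    // compilers share the foreach variable across iterations.
                    UIButton captured = button;
                    captured.TouchUpInside += (sender, e) =>
                        Console.WriteLine("Tapped button with tag " + captured.Tag);
                }
            }
        }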

    Read the article

  • abstract class MouseAdapter vs. interface

    - by Stefano Borini
    I noted this (it's a java.awt.event class):

        public abstract class MouseAdapter implements MouseListener, MouseWheelListener, MouseMotionListener { .... }

    Then you are clearly forced to extend from this adapter:

        public class MouseAdapterImpl extends MouseAdapter {}

    The class is abstract and implements no methods. Is this a strategy to combine different interfaces into a single "basically an interface" class? I assume that in Java it's not possible to combine different interfaces into a single one without using this approach. In other words, it's not possible to do something like this in Java:

        public interface MouseAdapterIface extends MouseListener, MouseWheelListener, MouseMotionListener { }

    and then eventually:

        public class MouseAdapterImpl implements MouseAdapterIface {}

    Is my understanding of the point correct? What about C#?
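
    For the C# side of the question, here is a minimal sketch showing that a C# interface can inherit from several other interfaces, so the "combined interface" approach can be declared directly. The interface names are hypothetical, purely for illustration:

        // Hypothetical C# sketch: an interface may extend several interfaces,
        // so no abstract adapter class is needed to combine them.
        interface IMouseListener       { void MouseClicked(); }
        interface IMouseWheelListener  { void WheelMoved(); }
        interface IMouseMotionListener { void MouseMoved(); }

        interface IMouseHandler : IMouseListener, IMouseWheelListener, IMouseMotionListener { }

        class MouseHandlerImpl : IMouseHandler
        {
            public void MouseClicked() { }
            public void WheelMoved()   { }
            public void MouseMoved()   { }
        }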

    Read the article

  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo. The first requires that the repo is cloned into the document root of the webserver, and the post-update hook does a git pull:

        cd /srv/www/siteA/ || exit
        unset GIT_DIR
        git pull hub master

    The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root, i.e.:

        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f

    The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (however, files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but this is easily solved using htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?

    Read the article

  • vb.net vs. framework

    - by Joe
    What reasons are there to migrate from VB.NET-specific language functions to their .NET Framework equivalents? Examples:

        VB.NET:          UBound, MsgBox
        .NET Framework:  array.GetUpperBound(0), MessageBox

    Read the article

  • Rails :dependent => :destroy VS :dependent => :delete_all

    - by Sergey
    In the Rails guides it's described like this: "Objects will be in addition destroyed if they’re associated with :dependent => :destroy, and deleted if they’re associated with :dependent => :delete_all." Right, cool. But what's the difference between being destroyed and being deleted? I tried both and they seem to do the same thing.

    Read the article

  • WordPress 2.9.2 VS WordPress 3.0 Release Candidate

    - by metal-gear-solid
    I'm going to do a new install of WordPress. The new 3.0 version is coming: the current stable version is "WordPress 2.9.2", and "WordPress 3.0 Release Candidate" is the last release before the final version 3.0. So for now, should I set up 2.9.2 or the 3.0 Release Candidate? Will I have to replace all the RC files upon the final release? What are the cons of using a Release Candidate version?

    Read the article

  • Using static variable in function vs passing variable from caller

    - by Patrick
    I have a function which spawns various types of threads; one of the thread types needs to be spawned every x seconds. I currently have it like this:

        bool isTime( Time t )
        {
            return t >= now();
        }

        void spawner()
        {
            while( 1 )
            {
                Time t = now();
                if( isTime( t ) ) // isTime is called in more than one place in the real function
                {
                    launchthread();
                    t = now() + offset;
                }
            }
        }

    but I'm thinking of changing it to:

        bool isTime()
        {
            static Time t = now();
            if( t >= now() )
            {
                t = now() + offset;
                return true;
            }
            return false;
        }

        void spawner()
        {
            if( isTime() )
                launchthread();
        }

    I think the second way is neater, but I generally avoid statics in much the same way I avoid global data; does anyone have any thoughts on the different styles?

    Read the article

  • Ajaxcontroltoolkit VS. jQuery

    - by Jonesy
    Hi folks, I asked a question a few days ago about how to customise the calendar extender of the AjaxControlToolkit library and got a response saying I should ditch the toolkit for jQuery. I have to say I've heard jQuery mentioned quite a bit, and more importantly I've seen it as a requirement for an increasing number of web development job vacancies. I do like the AjaxControlToolkit, with its simplicity and integration with Visual Studio. Does anyone have an opinion on the two of these? I'd love to hear from developers with experience of both these Ajax solutions. -- Jonesy

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

    Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter, MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking").

    Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!

    Read the article

  • viewstack vs. tab navigator

    - by donpal
    I'm new to Flex and was looking at some of the components that ship with it. Can someone tell me the difference between ViewStack and TabNavigator? They seem to be somewhat similar. When do you use one or the other?

    Read the article

  • HTML5 <audio> Safari live broadcast vs not

    - by Peter Parente
    I'm attempting to embed an HTML5 audio element pointing to MP3 or OGG data served by a PHP file. When I view the page in Safari, the controls appear, but the UI says "Live Broadcast." When I click play, the audio starts as expected. Once it ends, however, I can't start it playing again by clicking play. Even using the JS API on the audio element and setting currentTime to 0 fails with an index error exception. I suspected the headers from the PHP script were the problem, particularly missing a content length. But that's not the case. The response headers include a proper Content-Length to indicate the audio has finite size. Furthermore, everything works as expected in Firefox 3.5+. I can click play on the audio element multiple times to hear the sound replay. If I remove the PHP script from the equation and serve up a static copy of the MP3 file, everything works fine in Safari. Does this mean Safari is treating audio src URLs with query parameters differently than URLs that don't have them? Anyone have any luck getting this to work?

    My simple example page is:

        <!DOCTYPE html>
        <html>
        <head></head>
        <body>
        <audio controls autobuffer>
          <source src="say.php?text=this%20is%20a%20test&format=.ogg" />
          <source src="say.php?text=this%20is%20a%20test&format=.mp3" />
        </audio>
        </body>
        </html>

    HTTP headers from the PHP script:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 15:39:34 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.10
        Content-Length: 8993
        Keep-Alive: timeout=2, max=98
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    HTTP headers from direct file access:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 20:06:59 GMT
        Server: Apache
        Last-Modified: Sun, 03 Jan 2010 03:20:02 GMT
        Etag: "a404b-c3f-47c3a14937c80"
        Accept-Ranges: bytes
        Content-Length: 8993
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    I tried hard-coding the Accept-Ranges header into the script too, but no luck.

    Read the article

  • Condition checking vs. Exception handling

    - by Aidas Bendoraitis
    When is exception handling preferable to condition checking? There are many situations where I can choose between one or the other. For example, this is a summing function which uses a custom exception:

        # module mylibrary
        class WrongSummand(Exception):
            pass

        def sum_(a, b):
            """ returns the sum of two summands of the same type """
            if type(a) != type(b):
                raise WrongSummand("given arguments are not of the same type")
            return a + b

        # module application using mylibrary
        from mylibrary import sum_, WrongSummand

        try:
            print sum_("A", 5)
        except WrongSummand:
            print "wrong arguments"

    And this is the same function, which avoids using exceptions:

        # module mylibrary
        def sum_(a, b):
            """ returns the sum of two summands if they are both of the same type """
            if type(a) == type(b):
                return a + b

        # module application using mylibrary
        from mylibrary import sum_

        c = sum_("A", 5)
        if c is not None:
            print c
        else:
            print "wrong arguments"

    I think that using conditions is always more readable and manageable. Or am I wrong? What are the proper cases for defining APIs which raise exceptions, and why?

    Read the article

  • mod_wsgi daemon mode vs threaded fastcgi

    - by t0ster
    Can someone explain the difference between Apache mod_wsgi in daemon mode and Django FastCGI in threaded mode? They both use threads for concurrency, I think. Suppose that I'm using nginx as a front end to Apache mod_wsgi. UPDATE: I'm comparing Django's built-in FastCGI (./manage.py method=threaded maxchildren=15) and mod_wsgi in 'daemon' mode (WSGIDaemonProcess example threads=15). They both use threads and acquire the GIL, am I right?

    Read the article

  • mySQL - One large query vs Ajax individual queries

    - by Mark
    Hi guys, I guess no one will have a definitive answer to this, but considered predictions would be appreciated. I am in the process of developing a MySQL database for a web application, and my question is: is it more efficient to make a single query that returns a single row using AJAX, or to request 100-700 rows when the user will likely only ever use the results of two or three? Really I am asking which is heavier for the server: 2-3 requests with one result each, or 1 request with 100-700 results? Thanks, Mark

    Read the article

  • HTML Submit button vs AJAX based Post (ASP.NET MVC)

    - by Graham
    I'm after some design advice. I'm working on an application with a fellow developer. I'm from the WebForms world and he's done a lot with jQuery and AJAX stuff. We're collaborating on a new ASP.NET MVC 1.0 app. He's done some pretty amazing stuff that I'm just getting my head around, and used some 3rd party tools etc. for datagrids etc., but... he rarely uses Submit buttons, whereas I use them most of the time.

    He uses a button but then attaches Javascript to it that calls an MVC action which returns a JSON object. He then parses the object to update the datagrid. I'm not sure how he deals with server-side validation - I think he adds a message property to the JSON object. A sample scenario would be to "Save" a new record that then gets added to the gridview. The user doesn't see a postback as such, so he uses jQuery to disable the UI whilst the controller action is running. TBH, it looks pretty cool.

    However, the way I'd do it would be to use a Submit button to post back, let the ModelBinder populate a typed model class, parse that in my controller action method, update the model (and apply any validation against the model), update it with the new record, then send it back to be rendered by the View. Unlike him, I don't return a JSON object; I let the View (and datagrid) bind to the new model data.

    Both solutions "work", but we're obviously taking the application down different paths, so one of us has to re-work our code... and we don't mind whose it has to be. What I'd prefer, though, is that we adopt the "industry-standard" way of doing this. I'm unsure as to whether my WebForms background is influencing the fact that his way just "doesn't feel right", in that a "submit" is meant to submit data to the server. Any advice at all please - many thanks.
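
    As a rough sketch of the two styles being compared - the AJAX approach returning JSON for client script to parse, versus the postback approach letting the ModelBinder populate a typed model and re-rendering a full View - here is a minimal, hypothetical controller; the model, action, and view names are illustrative only:

        using System.Web.Mvc;

        // Hypothetical model type, for illustration only.
        public class Record
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class RecordsController : Controller
        {
            // AJAX style: jQuery posts the form, the action returns JSON,
            // and client-side script parses it to update the grid in place.
            [AcceptVerbs(HttpVerbs.Post)]
            public JsonResult SaveAjax(Record record)
            {
                if (!ModelState.IsValid)
                    return Json(new { success = false, message = "validation failed" });

                // ... persist the record here ...
                return Json(new { success = true, id = record.Id });
            }

            // Postback style: the ModelBinder populates the typed model, the
            // action validates and saves, and a full View (with its grid) is
            // rendered again.
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Save(Record record)
            {
                if (!ModelState.IsValid)
                    return View("Edit", record);

                // ... persist the record here ...
                return RedirectToAction("Index");
            }
        }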

    Read the article

  • SelfReferenceProperty vs. ListProperty Google App Engine

    - by John
    Hi all, I am experimenting with the Google App Engine and have a question. For the sake of simplicity, let's say my app is modeling a computer network (a fairly large corporate network with 10,000 nodes). I am trying to model my Node class as follows:

        class Node(db.Model):
            name = db.StringProperty()
            neighbors = db.SelfReferenceProperty()

    Let's suppose, for a minute, that I cannot use a ListProperty(). Based on my experiments to date, I can assign only a single entity to 'neighbors' - and I cannot use the "virtual" collection (node_set) to access the list of Node neighbors. So... my questions are: Does SelfReferenceProperty limit you to a single entity that you can reference? If I instead use a ListProperty, I believe I am limited to 5,000 keys, which I need to exceed. Thoughts? Thanks, John

    Read the article

  • Auto-implemented getters and setters vs. public fields

    - by tclem
    I see a lot of example code for C# classes that does this:

        public class Point
        {
            public int x { get; set; }
            public int y { get; set; }
        }

    Or, in older code, the same with an explicit private backing value and without the new auto-implemented properties:

        public class Point
        {
            private int _x;
            private int _y;

            public int x
            {
                get { return _x; }
                set { _x = value; }
            }

            public int y
            {
                get { return _y; }
                set { _y = value; }
            }
        }

    My question is why. Is there any functional difference between doing the above and just making these members public fields, like below?

        public class Point
        {
            public int x;
            public int y;
        }

    To be clear, I understand the value of getters and setters when you need to do some translation of the underlying data. But in cases where you're just passing the values through, it seems needlessly verbose.

    Read the article

  • Handles Comparison: empty classes vs. undefined classes vs. void*

    - by Nawaz
    Microsoft's GDI+ defines many empty classes to be treated as handles internally. For example (source GdiPlusGpStubs.h):

        //Approach 1
        class GpGraphics {};
        class GpBrush {};
        class GpTexture : public GpBrush {};
        class GpSolidFill : public GpBrush {};
        class GpLineGradient : public GpBrush {};
        class GpPathGradient : public GpBrush {};
        class GpHatch : public GpBrush {};
        class GpPen {};
        class GpCustomLineCap {};

    There are two other ways to define handles. They are:

        //Approach 2
        class BOOK;           //no need to define it!
        typedef BOOK *PBOOK;
        typedef PBOOK HBOOK;  //handle to be used internally

        //Approach 3
        typedef void* PVOID;
        typedef PVOID HBOOK;  //handle to be used internally

    I just want to know the advantages and disadvantages of each of these approaches. One advantage with Microsoft's approach is that they can define a type-safe hierarchy of handles using empty classes, which (I think) is not possible with the other two approaches. What else?

    EDIT: One advantage with the second approach (i.e. using incomplete classes) is that we can prevent clients from dereferencing the handles (which means this approach appears to support encapsulation strongly, I suppose). The code would not even compile if one attempts to dereference a handle. What else?

    Read the article

  • Javascript: Inline function vs predefined functions

    - by glaz666
    Can anybody throw me some arguments for using inline functions versus passing predefined function names to some handler? I.e. which is better:

        (function(){
            setTimeout(function(){ /*some code here*/ }, 5);
        })();

    versus

        (function(){
            function invokeMe()
            {
                /*code*/
            }
            setTimeout(invokeMe, 5);
        })();

    Strange question, but we are almost fighting in the team about this.

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, however I have only used the Windows Forms DataGridView and not the WPF DataGrid (which was only included in .NET 4, or could be added to .NET 3.5 from CodePlex, I understand). I am about to develop an app using one of these controls heavily for large amounts of data, and have read that performance is an issue with the WPF DataGrid, so I may stick to the Windows Forms DataGridView. Is this the case? I do not want to use a 3rd party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF I would prefer to use .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better. Also, I want to use ADO with DataTables, which I feel is better suited to Windows Forms.

    Read the article

  • Using ZLib unit to compress files vs using ZipForge

    - by user193655
    There are many questions on zipping in Delphi; anyway, this is not a duplicate. I am using ZipForge for zip/unzip capability in my application. Currently I use 2 features of ZipForge:

        1) zip and unzip (!)
        2) password protect the archives

    Now I am removing the password from all the archives, so I only need to zip and unzip files. I zip them just to minimize bandwidth when uploading/downloading files from the server. So my idea is to process all the files once, unzipping them (with the password) and rezipping them without a password. I have nothing against ZipForge; anyway, it is an extra component: every time I upgrade to a newer Delphi version I have to wait for the new IDE support, and moreover the more components, the more problems during installation. So since what I do is very simple, I'd like to replace ZipForge with 2 simple functions using the ZLib unit. I found (and tested) the functions here on Torry's. What do you think of using the ZLib unit? Do you see any potential problem that I would not have with ZipForge? Can you comment on speed?

    Read the article

  • jquery .html() vs div creation

    - by Argiropoulos Stavros
    Hello. Let's say I have a div:

        <div id='myDiv'></div>

    If I do:

        $('#myDiv').html('<div id='mySecondDiv'></div>')

    is it the same as:

        var mySecondDiv = $('<div></div>');
        $('myDiv').add(mySecondDiv);

    I'm pretty sure I have syntax errors, but you get the idea. Are the two methods the same?

    Read the article

  • Unable to add CRM 2011's Organization service as Service Reference to VS project

    - by Scorpion
    I have a problem accessing the Organization Service when I try to add it as a Service Reference in Visual Studio. However, I can access the service in a browser. I have tried to add the OrganizationData service and there is no issue with that.

        An Error occurred while attempting to find service at 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.

    Error Details:

        There was an error downloading 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc/_vti_bin/ListData.svc/$metadata'.
        The request failed with HTTP status 400: Bad Request.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    Read the article
