Search Results

Search found 54446 results on 2178 pages for 'struct vs class'.


  • if ('constant' == $variable) vs. if ($variable == 'constant')

    - by Tom Auger
    Lately, I've been working a lot in PHP, specifically within the WordPress framework. I'm noticing a lot of code in the form of: if ( 1 == $options['postlink'] ) where I would have expected to see: if ( $options['postlink'] == 1 ) Is this a convention found in certain languages or frameworks? Is there any reason the former approach is preferable to the latter (from a processing perspective, a parsing perspective, or even a human perspective)? Or is it merely a matter of taste? I have always thought it better, when performing a test, for the variable being tested against some constant to be on the left. It seems to map better to the way we would ask the question in natural language: "if the cake is chocolate" rather than "if chocolate is the cake".
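
    For context, the usual rationale for the constant-first ("Yoda") order is that it turns an accidental assignment into a compile error rather than a silent bug. The post is about PHP, but the same hazard exists in any C-family language; a minimal sketch in C++ (not from the original post):

        #include <iostream>

        int main() {
            int status = 0;

            // A mistyped comparison where '=' was written instead of '==':
            //   if (status = 1) { ... }   // compiles, assigns 1, and is always true
            //   if (1 = status) { ... }   // compile error: cannot assign to a literal
            // Putting the constant on the left makes the typo fail loudly.

            if (1 == status) {             // constant-first ("Yoda") style
                std::cout << "status is 1\n";
            }
            if (status == 1) {             // conventional style, same meaning
                std::cout << "status is 1\n";
            }
            return 0;
        }

    Modern compilers also warn about an assignment used directly as a condition, which is part of why many style guides treat the choice as a matter of taste and readability.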

    Read the article

  • What is Happening vs. What is Interesting

    - by Geertjan
    Devoxx 2011 was yet another confirmation that all development everywhere is either on the web or on mobile phones. Whether you looked at the conference schedule or attended sessions or talked to speakers at any point at all, it was very clear that no development whatsoever is done anymore on the desktop. In fact, that's something Tim Bray himself told me to my face at the speakers dinner. No new developments of any kind are happening on the desktop. Everyone who is currently on the desktop is working overtime to move all of their applications to the web. They're probably also creating a small subset of their application on an Android tablet, with an even smaller subset on their Android phone. Then you scratch that monolithic surface and find some interesting results. Without naming any names, I asked one of these prominent "ah, forget about the desktop" people at the Devoxx speakers dinner (and I have a witness): "Yes, the desktop is dead, but what about air traffic control, stock trading, oil analysis, risk management applications? In fact, what about any back office application that needs to be usable across all operating systems? Here there is no concern whatsoever with 100% accessibility, which is, after all, the only thing that the web has over the desktop (except when there's a network failure, of course, or when you find yourself in the 3/4 of the world where there are bandwidth problems). There are thousands of hidden applications out there that have processing requirements, security requirements, and the requirement that they'll be available even when the network is down or even completely unavailable. Isn't that a valid use case, and aren't there thousands of applications that fall into this so-called niche category? Are you not, in fact, confusing consumer applications, which are increasingly web-based and mobile-based, with high-end corporate applications, which typically need to do massive processing, of one kind or another, for which the web and mobile worlds are completely unsuited?" And you will not believe what the reply to the above question was. (Again, I have a witness to this discussion.) But here it is: "Yes. But those applications are not interesting. I do not want to spend any of my time or work in any way on those applications. They are boring." I'm sad to say that the leaders of the software development community, including those in the Java world, either share the above opinion or are led by it. Because they find something that is not new to be boring, they move on to what is interesting and start talking as if the supposedly boring developments don't even exist. (Kind of like a rapper pretending classical music doesn't exist.) Time and time again I find myself giving Java desktop development courses (at companies, i.e., not to hobbyists or students, but companies, i.e., the places where dollars are earned), where developers say to me: "The course you're giving about creating cross-platform, loosely coupled, and highly cohesive applications is really useful to us. Why do we never find information about this topic at conferences? Why can we never attend a session at a conference where the story about pluggable cross-platform Java is told? Why do we get the impression that we are uncool because we're not on the web and because we're not on a mobile phone, while the reason for that is that we're creating $1,000,000 simulation software which has nothing to gain from being on the web or on the mobile phone?" And then I say: "Because nobody knows you exist. Because you're not submitting abstracts to conferences about your very interesting use cases. And because conferences tend to focus on what is new, which tends to be web related (especially HTML 5) or mobile related (especially Android). Because you're not taking the responsibility on yourself to tell the real stories about the real applications being developed all the time and every day. Because you yourself think your work is boring, while in fact it is fascinating. Because desktop developers are working from 9 to 5 on the desktop, in secure environments, such as banks and defense, where you can't spend time on, nor have the interest in, blogging your latest tip or trick, as opposed to web developers, who tend to spend a lot of time on the web anyway and are therefore much more inclined to create buzz about the kind of work they're doing." So, next time you look at a conference program and wonder why there are no stories about large desktop development projects in the program, here's the short answer: "No one is going to put those items on the program until you start submitting those kinds of sessions. And until you start blogging. Until you start creating the buzz that the web developers have been creating around their work for the past 10 years or so. And, yes, indeed, programmers get the conference they deserve." And what about Tim Bray? Ask yourself how many desktop developers you think he, as Google's lead web technology evangelist, talks to and, more generally, what his frame of reference is and what, clearly, he considers to be most interesting.

    Read the article

  • Speaking at the VS 2010 Launch at TechEd India this week

    I'll be speaking at TechEd India and the Visual Studio 2010 Launch in Bangalore, India this week. I'll be doing three sessions: Tuesday 2:30 - Building RESTful Applications with the Open Data Protocol; Wednesday 12:30 - Building Applications with Silverlight 4.0 and WCF RIA Services; and Wednesday 2:30 - Exploring the Silverlight 4.0 Business Features. In addition, Team Telerik will be staffing a booth with T-shirts (hopefully they get out of customs on time!) and live demos of our products and our brand new product to be announced today! See you at my sessions or at the booth!

    Read the article

  • Vector vs Scalar velocity?

    - by Serguei Fedorov
    I am revamping an engine I have been working on and off on for the last few weeks to use a directional vector to dictate direction; this way I can dictate the displacement based on a direction. However, the issue I am trying to overcome is the following: the speed along X and the speed along Y are unrelated to one another. If gravity pulls the object down with an increasing velocity, my velocity along X should not change. This is very easy to implement if my speed is broken into a Vector datatype: Vector.X dictates one direction and Vector.Y dictates the other (assuming we are not concerned about the Z axis). However, this defeats the purpose of the directional vector because: SpeedX = 10 SpeedY = 15 [1, 1] normalized = ~[0.7, 0.7] [0.7, 0.7] * [10, 15] = [7, 10.5] As you can see, my direction is now "scaled" to my speed, which is no longer the direction that I want to be moving in. I am very new to vector math and this is a learning project for me. I looked around a little bit on the internet but I still want to figure out things on my own (not just look at an example and copy off it). Is there a way around this? Using a directional vector is extremely useful but I am a little bit stumped at this problem. I am sorry if my mathematical understanding is completely wrong.
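
    One common way out of this (a sketch under my own assumptions, not from the original post): store velocity as a plain per-axis vector, apply gravity only to its Y component, and derive the scalar speed and unit direction from the vector when they are needed, instead of storing a normalized direction and multiplying it by per-axis speeds.

        #include <cmath>

        struct Vec2 {
            float x;
            float y;
        };

        struct Body {
            Vec2 position{0.0f, 0.0f};
            Vec2 velocity{0.0f, 0.0f};           // X and Y evolve independently

            void update(float dt, float gravity) {
                velocity.y += gravity * dt;      // gravity changes only the Y component
                position.x += velocity.x * dt;   // X speed is unaffected
                position.y += velocity.y * dt;
            }

            float speed() const {                // scalar speed = length of the velocity vector
                return std::sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
            }

            Vec2 direction() const {             // unit direction, derived on demand
                float s = speed();
                return s > 0.0f ? Vec2{velocity.x / s, velocity.y / s} : Vec2{0.0f, 0.0f};
            }
        };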

    Read the article

  • Python Multiprocessing with Queue vs ZeroMQ IPC

    - by Imraan
    I am busy writing a Python application using ZeroMQ and implementing a variation of the Majordomo pattern as described in the ZGuide. I have a broker as an intermediary between a set of workers and clients. I want to do some extensive logging for every request that comes in, but I do not want the broker to waste time doing that. The broker should pass that logging request to something else. I have thought of two ways: (1) create workers that are only for logging and use the ZeroMQ IPC transport, or (2) use multiprocessing with a Queue. I am not sure which one is better or faster, for that matter. The first option does allow me to use the current worker base classes that I already use for normal workers, but the second option seems quicker to implement. I would like some advice or comments on the above, or possibly a different solution.

    Read the article

  • Lead/Manager vs Individual contributor: which is better?

    - by User11091981
    Currently I am working at a company as a Manager (software dev), but I only have 6.8 years of experience. I joined this company as a software engineer and got promoted to SSE, Lead, and Manager. Some of my team members have more experience than I do, and I feel like I need more exposure/experience to take on these roles. I feel it would be better to be an individual contributor, learn many things for another couple of years, and become a Principal Software Engineer, rather than getting involved in management. Options I have: 1. Ask my current employer to make me an individual contributor? 2. Find a new company and join as an SSE to start over? 3. Find a new company for a lead position? Please advise.

    Read the article

  • International TLDs vs. duplicate content

    - by Litso
    Hey all, I currently work at a pretty big website that has visitors from around the globe. My job is to help out on the SEO, and one thing we've been discussing lately is the use of international TLDs. The ones we use fall into three categories: (1) (partly) translated websites like .es and .de that serve most of the content in the country's language; (2) non-translated (English) websites for non-English languages (due to a lack of translations), like .ro and .cz; and (3) English websites for English-speaking countries with localized TLDs (.co.nz, .co.uk). On one hand I really have the feeling this is causing a lot of duplicate content, especially for the last two categories of TLDs. On the other hand, it seems a lot like country-specific TLDs tend to score a lot better in that country's Google. Would it be advisable to keep on using these domains, or should we canonicalize them all to the .com version?

    Read the article

  • iOS Variable vs Property

    - by William Smith
    Just started diving into Objective-C and iOS development, and I was wondering when, and in which location, I should be declaring variables/properties. The main piece of code I need explained is below. Why and when should I declare variables inside the @interface block, and why is the same variable declared with an underscore prefix and then again as a property? And then in the implementation they do @synthesize tableView = _tableView (I understand what synthesize does). Thanks :-)
        @interface ViewController : UIViewController <UITableViewDataSource, UITableViewDelegate> {
            UITableView *_tableView;
            UIActivityIndicatorView *_activityIndicatorView;
            NSArray *_movies;
        }
        @property (nonatomic, retain) UITableView *tableView;
        @property (nonatomic, retain) UIActivityIndicatorView *activityIndicatorView;
        @property (nonatomic, retain) NSArray *movies;

    Read the article

  • Gnome- vs Unity-panel (applet) compatibility?

    - by user5676
    I just love the indicator-applet and other parts of the Ayatana project and think Ubuntu has done an awesome job there. And as the question about applet compatibility seems to have been answered with a 'no', I'd like to take the question to the next level - the 'why' and 'why not'. How come these Ayatana applets today work in gnome-panel but GNOME applets won't work in the Unity panel? And - as it's connected - why not make them compatible? Isn't it all about usability?

    Read the article

  • use jQuery to get 'true size' of image without removing the class

    - by jon3laze
    I am using Jcrop on an image that is resized with CSS for uniformity.
    JS:
        <script type="text/javascript">
            $(window).load(function() {
                //invoke Jcrop API and set options
                var api = $.Jcrop('#image', {
                    onSelect: storeCoords,
                    trueSize: [w, h]
                });
                api.disable(); //disable until ready to use
                //enable the Jcrop on crop button click
                $('#crop').click(function() {
                    api.enable();
                });
            });
            function storeCoords(c) {
                $('#X').val(c.x);
                $('#Y').val(c.y);
                $('#W').val(c.w);
                $('#H').val(c.h);
            };
        </script>
    HTML:
        <body>
            <img src="/path/to/image.jpg" id="image" class="img_class" alt="" />
            <br />
            <span id="crop" class="button">Crop Photo</span>
            <span id="#X" class="hidden"></span>
            <span id="#Y" class="hidden"></span>
            <span id="#W" class="hidden"></span>
            <span id="#H" class="hidden"></span>
        </body>
    CSS:
        body { font-size: 13px; width: 500px; height: 500px; }
        .image { width: 200px; height: 300px; }
        .hidden { display: none; }
    I need to set the h and w variables to the size of the actual image. I tried using the .clone() manipulator to make a copy of the image and then remove the class from the clone to get the sizing, but it sets the variables to zeros.
        var pic = $('#image').clone();
        pic.removeClass('image');
        var h = pic.height();
        var w = pic.width();
    It works if I append the image to an element in the page, but these are larger images and I would prefer not to be loading them as hidden images if there is a better way to do this. Also removing the class, setting the variables, and then re-adding the class was producing sporadic results. I was hoping for something along the lines of:
        $('#image').removeClass('image', function() {
            h = $(this).height();
            w = $(this).width();
        }).addClass('image');
    But the removeClass function doesn't work like that :P

    Read the article

  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm learning the basics of OpenGL with LWJGL currently, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose and their benefit. My understanding is that I'll create an FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently doing for texture management be called, and how does it differ from using FBOs? What are the benefits of using FBOs? How do they fit into the grand rendering scheme of things?
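
    For what it's worth, an FBO is an off-screen render target (render-to-texture), not a texture-management mechanism, so it sits alongside ordinary textured-quad drawing rather than replacing it. A minimal sketch of creating one, using raw desktop-GL calls rather than the LWJGL bindings:

        // Sketch: assumes an OpenGL 3.0+ context and a function loader (e.g. GLEW or GLAD) are already set up.
        #include <GL/glew.h>

        GLuint createRenderTargetTexture(GLsizei width, GLsizei height, GLuint& fboOut) {
            GLuint fbo = 0, colorTex = 0;

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);

            // The texture that will receive everything rendered while the FBO is bound.
            glGenTextures(1, &colorTex);
            glBindTexture(GL_TEXTURE_2D, colorTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                // handle an incomplete framebuffer here
            }

            // While fbo is bound, draw calls land in colorTex instead of the window;
            // binding framebuffer 0 returns to normal on-screen rendering.
            glBindFramebuffer(GL_FRAMEBUFFER, 0);

            fboOut = fbo;
            return colorTex;
        }

    Once the FBO is unbound, the attached texture can be sampled like any other, which is what makes FBOs useful for post-processing, mirrors, minimaps, and similar effects.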

    Read the article

  • Instantiate proper class based on some input

    - by Adam Backstrom
    I'm attempting to understand how "switch as a code smell" applies when the proper code path is determined by some observable piece of data. My Webapp object sets an internal "host" object based on the hostname of the current request. Each Host subclass corresponds to one possible hostname and application configuration: WwwHost, ApiHost, etc. What is an OOP way for a host subclass to accept responsibility for a specific hostname, and for Webapp to get an instance of the appropriate subclass? Currently, the hostname check and Host instantiation exist within the Webapp object. I could move the test into a static method within the Host subclasses, but I would still need to explicitly list those subclasses in Webapp unless I restructure further. It seems like any solution will require new subclasses to be added to some centralized list.
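
    One shape this often takes is a registry mapping each hostname to a factory, so a lookup replaces the switch. A sketch with hypothetical names, in C++ rather than the question's PHP-style codebase:

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        class Host {
        public:
            virtual ~Host() = default;
            virtual void configure() = 0;        // each subclass applies its own configuration
        };

        // Central registry: hostname -> factory.
        using HostFactory = std::function<std::unique_ptr<Host>()>;

        std::map<std::string, HostFactory>& hostRegistry() {
            static std::map<std::string, HostFactory> registry;
            return registry;
        }

        struct HostRegistrar {
            HostRegistrar(const std::string& hostname, HostFactory factory) {
                hostRegistry()[hostname] = std::move(factory);
            }
        };

        class WwwHost : public Host {
        public:
            void configure() override { /* www-specific setup */ }
        };
        static HostRegistrar wwwRegistrar("www.example.com", [] { return std::make_unique<WwwHost>(); });

        class ApiHost : public Host {
        public:
            void configure() override { /* api-specific setup */ }
        };
        static HostRegistrar apiRegistrar("api.example.com", [] { return std::make_unique<ApiHost>(); });

        // What Webapp calls: look the hostname up instead of switching on it.
        std::unique_ptr<Host> makeHost(const std::string& hostname) {
            auto it = hostRegistry().find(hostname);
            return it != hostRegistry().end() ? it->second() : nullptr;
        }

    The trade-off the question anticipates is still there: each subclass must be registered somewhere, but the registration lives next to the class definition instead of inside Webapp.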

    Read the article

  • Rails vs. Drupal [closed]

    - by joker13
    I was querying indeed.com/salary to investigate general market trends. When comparing Ruby on Rails with Drupal, you observe a substantial difference between the two. I'm not sure if the data on indeed.com is reliable or not, but I'd appreciate your comments if you have ever tried both Rails and Drupal. Actually, I am a .NET developer considering an alternative to my ASP.NET MVC skills, and I'd like to learn some non-Microsoft web programming skills as well.

    Read the article

  • Google Analytics HTTP vs HTTPS

    - by Pelangi
    I want to use Google Analytics on a website that uses both HTTP and HTTPS, which works as explained below: secure pages accessed through https://mydomain.com/secure/* are always on HTTPS; any access to these pages through HTTP will be redirected to HTTPS; any other pages will be accessible through both HTTP and HTTPS. I have a Google Analytics profile with a URL using HTTPS. Will I cover all traffic? Do I need to create another profile using HTTP, and how should I apply the other profile?

    Read the article

  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync and a rebuild is necessary to make the build successful. In practice, you rarely need to Clean. Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right-click on the desired project name in the Solution Explorer tab and select Set as StartUp project. The project name now appears in bold. Compile just compiles the source file currently being edited. This is useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile. All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to explicitly do those steps every time. This allows for quick "trial and error" debugging. Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.NET ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).
    Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f
    Visual Studio 2005 (C#): http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en
    Visual Studio 2005 (VB.NET): http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en

    Read the article

  • Graphics card for parallel programming vs traditional methods

    - by Sambatyon
    With a simple search on Amazon one can see that the modern approach to parallel programming is to use your graphics card. However, I am still a little bit skeptical about it. My last computer has an 8-core CPU, which I think is enough for all my basic parallel needs; if I need more I will probably use MPI over a network using my old machines. All in all, why and/or when should I use CUDA or another method that uses my graphics card instead of traditional methods like pthreads, Java threads, Boost threads, or the new C++11 threads? What about using processes?

    Read the article

  • '/var/www/' vs '/home/$USER/public_html'

    - by OrganizedFellow
    I recently started using Ubuntu as a LAMP server. I've come across plenty of tutorials that say to place the files at '/var/www/' and I've also seen others that put them in '/home/$USER/public_html/'. During my testing and figuring stuff out, I was successfully able to view a test site URL from each location. Is one better than the other? I thought that maybe it was just preference. But the more I think about it, the more I want to keep all my work in my Home folder.

    Read the article

  • abstract class in C++

    - by Alexander
    I have a class derived from a class that is itself derived from an abstract class. The code is below. I have a FishTank class which is derived from Aquarium, and Aquarium is derived from Item. My question is: should I declare virtual int minWidth() const = 0; in Aquarium again, or is the code below sufficient?
        class Item{
        public:
            virtual int minWidth() const = 0;
        };

        class Aquarium{
        public:
            virtual int calWidth() = 0; // Pure virtual function.
        };

        class FishTank : public Aquarium{
        public:
            FishTank(int base1, int base2, int height);
            ~FishTank();
            int calWidth();
            int minWidth();
        };
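
    For reference, a sketch (not the poster's code) of how this usually works: a pure virtual declared in Item does not need to be redeclared in an intermediate abstract class such as Aquarium unless Aquarium overrides it; it only has to be implemented, with an exactly matching signature, in the concrete class.

        class Item {
        public:
            virtual ~Item() = default;
            virtual int minWidth() const = 0;       // declared once, here
        };

        class Aquarium : public Item {              // assuming Aquarium really does derive from Item
        public:
            virtual int calWidth() = 0;             // Aquarium adds its own pure virtual;
                                                    // minWidth() is simply inherited, still pure
        };

        class FishTank : public Aquarium {
        public:
            int calWidth() override { return 30; }          // placeholder value
            int minWidth() const override { return 10; }    // the const must match Item's declaration,
                                                            // otherwise this declares a new function
        };

    Note that the signature has to match exactly: a non-const int minWidth(); would not override Item's const pure virtual, and redeclaring the function in Aquarium would not change that.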

    Read the article

  • Structure vs. programming

    - by ChristopherW
    Is it bad that I often find myself spending more time on program structure than actually writing code inside methods? Is this common? I feel I spend more time laying the foundation than actually building the house (metaphorically). I understand that without a good foundation the house will cave in, but does it legitimately need to take half of the project to finalize the code structure? I understand design patterns, and I know where to go if I need help choosing one, but often I find myself doubting my own choices.

    Read the article

  • XNA ModelMesh.Draw vs GraphicsDevice.DrawIndexedPrimitives

    - by cubrman
    I am using XNA 4.0 and I wonder whether, when drawing models with multiple meshes, it is better to fill the vertex and index buffers first and call GraphicsDevice.DrawIndexedPrimitives(), or to simply use good ol' foreach(...) {ModelMesh.Draw()}. Is it possible to add data to vertex/index buffers at all, in order to pack all the models in the scene into them and then call Draw only once per frame? I would appreciate a link to an in-depth explanation. Thanks.

    Read the article

  • Referencing external javascript vs. hosting my own copy

    - by Mr. Jefferson
    Say I have a web app that uses jQuery. Is it better practice to host the necessary JavaScript files on my own servers along with my website files, or to reference them on jQuery's CDN (example: http://code.jquery.com/jquery-1.7.1.min.js)? I can see pros for both sides: (1) If it's on my servers, that's one less external dependency; if jQuery went down or changed their hosting structure or something like that, then my app breaks. But I feel like that won't happen often; there must be lots of small-time sites doing this, and the jQuery team will want to avoid breaking them. (2) If it's on my servers, that's one less external reference that someone could call a security issue. (3) If it's referenced externally, then I don't have to worry about the bandwidth to serve the files (though I know it's not that much). (4) If it's referenced externally and I'm deploying this web site to lots of servers that need to have their own copies of all the files, then it's one less file I have to remember to copy/update.

    Read the article

  • Sensor based vs. AABB based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
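
    For readers comparing the two approaches, the AABB baseline is just a per-axis overlap test; a minimal sketch (illustrative only, not tied to the question's engine):

        struct AABB {
            float x, y;   // top-left corner
            float w, h;   // width and height

            // Two boxes overlap unless one is entirely to the left of, or above, the other.
            bool intersects(const AABB& other) const {
                return x < other.x + other.w && other.x < x + w &&
                       y < other.y + other.h && other.y < y + h;
            }
        };

    A sensor-based approach, roughly speaking, replaces the single box with a few sample points (feet, sides, head) that are tested against the world individually, which is why it tends to cope better with slopes.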

    Read the article

  • Code review vs pair programming

    - by mericano1
    I was wondering what the general idea is about code review and pair programming. I do have my own opinion, but I'd like to hear from somebody else as well. Here are a few questions; please give me your opinion, even on just some of the points. First of all, are you aware of a way to measure the effectiveness of these practices? Do you think that if you pair program, code reviews are not necessary, or is it still good to have them both? Do you think anybody can do code review, or is it maybe better done by seniors only? In terms of productivity, do you think it suffers from pairing all the time, or will you eventually get it back in the long run? Thanks!

    Read the article
