Search Results

Search found 54446 results on 2178 pages for 'struct vs class'.

Page 351 of 2178

  • jQuery: is it possible to instantiate a class on the client and pass it to $.ajax to post?

    - by nisardotnet
    What I mean by that is: I have a class called Customer: public class Customer { private string _firstName; private string _lastName; public string FirstName { get { return _firstName; } set { _firstName = value; } } public string LastName { get { return _lastName; } set { _lastName = value; } } } How do I instantiate the class "Customer" in the client code, add the data and post it? (Not sure if this is possible.) Here is my client code: var customer = { "firstName": escape($('#txtFirstName').val()), "lastName": escape($('#txtLastName').val()) }; var jsonText = JSON.stringify({ customer: customer }); $.ajax( { type: "POST", url: "VisitorWS.asmx/AddCustomer", data: jsonText , //data: JSON.stringify(params), contentType: "application/json; charset=utf-8", dataType: "json", ...........
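    A minimal sketch of the server side this JSON is aimed at, assuming the VisitorWS.asmx service named in the URL above; the attribute placement and return value are illustrative, and the Customer class is the one from the question. The point is that no C# instance ever exists on the client: ASP.NET rebuilds the Customer parameter from the posted JSON, and the JSON root key only needs to match the web method's parameter name ("customer").

        using System.Web.Services;
        using System.Web.Script.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]                    // allows the method to be called with JSON from $.ajax
        public class VisitorWS : WebService
        {
            [WebMethod]
            public string AddCustomer(Customer customer)
            {
                // "customer" arrives deserialized from the posted JSON body;
                // the client only ever builds a plain JavaScript object.
                return customer.FirstName + " " + customer.LastName + " added";
            }
        }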

    Read the article

  • Is there a better way to create an object-oriented class with jQuery?

    - by Devon
    I use the jQuery extend function to extend a class prototype. For example: MyWidget = function(name_var) { this.init(name_var); } $.extend(MyWidget.prototype, { // object variables widget_name: '', init: function(widget_name) { // do initialization here this.widget_name = widget_name; }, doSomething: function() { // an example object method alert('my name is '+this.widget_name); } }); // example of using the class built above var widget1 = new MyWidget('widget one'); widget1.doSomething(); Is there a better way to do this? Is there a cleaner way to create the class above with only one statement instead of two?

    Read the article

  • .com vs. .me for personal and blogging sites: which one is better for SEO?

    - by Sameer Manas
    I basically have a domain under my name with a .com extension. I am planning to use it for my portfolio and also as a regular blog. Now, considering SEO and ranking, what is the best way to implement this? myname.com - Portfolio || myname.com/blog - Blog page (or) myname.com - Blog || myname.me - Portfolio I have absolutely no idea how TLDs impact SEO and ranking, so I seek the experts' advice on this. Thanks in advance.

    Read the article

  • Not able to delete a list that was dynamically created, using jQuery

    - by shin
    I have two HTML snippets here. The first one is dynamically generated by PHP and the second one is just HTML to test. I also have the following jQuery. When I click a cross with class delete in the second one (plain HTML), it works nicely. However, when I click a cross in the first one, it does not work. It redirects me to the home page with # at the end. I am hoping someone can point out what I am doing wrong. Thanks in advance. HTML First part (dynamically generated) <ul style="display: block;" id="message"> <li class="41"> <span class="user"><strong>shin</strong></span> <span class="msg"> delete this as well</span> <span class="date">2010-01-15 07:47:31</span> <a href="#" id="41" class="delete">x</a> <div class="clear"></div></li> <li class="40"> <span class="user"><strong>shin</strong></span> <span class="msg"> delete me as well</span> <span class="date">2010-01-14 16:01:44</span> <a href="#" id="40" class="delete">x</a> <div class="clear"></div></li> ... ...</ul> Second part, which is plain HTML <ul id="another"> <li><a href="#">you can't delete me</a></li> <li><a href="#" class="delete">delete this</a></li> <li><a href="#" class="delete">delete this</a></li> </ul> Here is the jQuery $(".delete").click(function(event) { event.preventDefault(); loading.fadeIn(); var commentContainer = $(this).parent(); var id = $(this).attr("id"); // var string = 'id='+ id ; $.ajax({ type: "POST", url: "index.php/admin/messages/changestatus/"+id, // data: string, cache: false, success: function(){ commentContainer.slideUp('slow', function() {$(this).remove();}); loading.fadeOut(); } }); return false; }); I am using CodeIgniter by the way.

    Read the article

  • GitHub: are there external tools for managing the issues list vs. a project backlog?

    - by DXM
    Recently I posted one of my projects1 on GitHub and as I was exploring the capabilities of the site, I noticed they have a rather decent issue tracking section. I want to use that section so that a) other people can report bugs if they'd like and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1 and fire off 2 or 3 items in one sitting without having to go back to a full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service or offline tool? Those of you that use GitHub, what is your experience in managing a backlog? I kinda hate the idea of manually managing two disconnected lists like some people seem to be doing with Wiki project pages. 1 - Are shameless plugs allowed on this site? I searched but didn't find a definite answer. If it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little, auto-collapsible utility that gets stuck to the desktop and allows easy access to the folders you constantly use.

    Read the article

  • What's the best way to change the namespace of a highly referenced class?

    - by vanslly
    I am attempting to move a highly referenced class from one namespace to another. Simply moving the file into the new project, which has a different root namespace, results in over 1100 errors throughout my solution. Some references to the class involve fully qualified namespace referencing and others involve importing of the namespace. I have tried using a refactoring tool (Refactor Pro) to rename the namespace, in the hope that all references to the class would change, but this resulted in the aforementioned problem. Does anyone have ideas on how to tackle this challenge without needing to drill into every file manually and change the fully qualified namespace, or import the new one if it doesn't exist already? Thanks.

    Read the article

  • Language-Agnostic Basic Programming Question

    - by Rachel
    This is a very basic question from a programming point of view, but as I am in the learning phase, I thought I would better ask this question rather than have a misunderstanding or narrow knowledge about the topic. So do excuse me if somehow I mess it up. Question: Let's say I have classes A, B, C and D. Class A has some piece of code which I need to have in classes B, C and D, so I am extending class A in class B, class C, and class D. Now how can I access the functions of class A in the other classes? Do I need to create an object of class A and then access the functions of class A, or, since I am extending A in the other classes, can I call the functions internally using the this reference? If possible I would really appreciate it if someone could explain this concept with a code sample showing how the logic flows.
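    A minimal sketch of the second option (class and method names are illustrative): because B extends A, any instance of B already carries A's members, so no separate A object is needed and the inherited method can be called directly, or equivalently through this.

        using System;

        class A
        {
            // the shared piece of code that B, C and D all need
            protected void SharedWork()
            {
                Console.WriteLine("running A's shared code");
            }
        }

        class B : A
        {
            public void DoSomething()
            {
                SharedWork();            // inherited; the same as this.SharedWork()
            }
        }

        class Program
        {
            static void Main()
            {
                new B().DoSomething();   // prints "running A's shared code"
            }
        }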

    Read the article

  • How can I use my own connection class with a strongly typed dataset?

    - by Maslow
    I have designed a class with SqlClient.SqlCommand wrappers to implement such functionality as automatic retries on timeout, Async (thread safety), error logging, and some SQL Server functions like WhoAmI. I've used some strongly typed datasets, mainly for display purposes only, but I'd like to have the same database functionality that I use with my class. Is there an interface I can implement or a way to hook my command/connection class into the dataset at design time or runtime? Or would I need to somehow write a wrapper for the dataset to implement these types of functions? If this is the only option, could it be made generic to wrap anything that inherits from DataSet?
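    A minimal sketch of the generic-wrapper idea, assuming the custom class can hand out a configured SqlCommand (the wrapper and dataset names in the usage comment are hypothetical). Designer-generated typed DataTables all derive from DataTable and have a parameterless constructor, so one helper can fill any of them while the retry/logging behavior of the custom command still applies.

        using System.Data;
        using System.Data.SqlClient;

        public static class TypedTableFiller
        {
            // Fills any designer-generated typed DataTable via the caller's own SqlCommand.
            public static TTable Fill<TTable>(SqlCommand command) where TTable : DataTable, new()
            {
                var table = new TTable();
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(table);   // query columns map onto the typed columns by name
                }
                return table;
            }
        }

        // usage (names hypothetical):
        // var customers = TypedTableFiller.Fill<NorthwindDataSet.CustomersDataTable>(
        //     myCommandWrapper.CreateCommand("SELECT * FROM Customers"));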

    Read the article

  • Changing the property type in a class that implements an interface with an object-typed property

    - by used2could
    I know the title is a bit confusing but bear with me. (I'm up for suggestions on a new title, lol.) I'm writing a TemplateEngine that will allow me to use my own markup in text-based files. I'm wanting to add controls as plugins as the application matures. Currently I've got a structure like the following: interface IControl string Id object Value class Label : IControl string Id string Value class Repeater : IControl string Id List<IControl> Value Now you'll see the strange part right away in the Repeater class with the Value property. I was hoping that having the Value type as object in the interface would allow me the flexibility to expand the controls as I go along. The compiler doesn't like this, and for good reason I guess. Does anyone have any suggestions on how to accomplish this? Note: Please don't go into suggesting things like using Spark View Engine for templating. There is a reason I'm creating extra work for myself.
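    A minimal sketch of one common way around the object-typed Value (not the poster's code, just an illustration): make the interface generic so each control declares its own value type, and keep a non-generic base interface for collections that hold mixed controls.

        using System.Collections.Generic;

        interface IControl
        {
            string Id { get; }
        }

        interface IControl<TValue> : IControl
        {
            TValue Value { get; }           // each control picks its own value type
        }

        class Label : IControl<string>
        {
            public string Id { get; set; }
            public string Value { get; set; }
        }

        class Repeater : IControl<List<IControl>>
        {
            public string Id { get; set; }
            public List<IControl> Value { get; set; }   // children via the non-generic base
        }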

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if there were none, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
    That’s pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the outdated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having hundreds of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn’t it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high-performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
    This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build that is triggered from the second check-in, which contains the “other half” of your changes, should pass, and so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have run, and ensure the preservation of your data. Or, even if the default tool behavior would have worked but you simply know a better way, you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own. Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • Today VS 2010 SP1 comes out, any news on the roadmap for Visual Studio 2012?

    - by Abel
    Today Visual Studio 2010 SP1 comes out as a general availability release. This made me wonder about the upcoming release of Visual Studio 2012: What are Microsoft's plans for Visual Studio 2012? I heard they'll come out with a new version every two years. Are there any open fora or discussions? When will a preview be publicly available? But most importantly: what are the new highlights and improvements in .NET and C#/F#/VB (and C++ of course, a request from Stijn)?

    Read the article

  • OpenGL programming vs. Blender software: which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and am also developing my own C++ framework library to use it effectively. Recently I came across the Blender software, which is used for graphics creation and is in turn written with OpenGL itself. For my part-time hobby of learning graphics, I just want to create small movie or video segments, e.g. related to construction engineering, epic stories and so on. There may be very minimal to no mouse-keyboard interaction for those videos, unlike video games, which are highly interactive. I was wondering if learning OpenGL from scratch is worth it for this, or should I invest my time in learning the Blender software? There are quite a few good example movies created using Blender shown on its website. Other such open source, cross-platform alternatives which can serve my aforementioned purpose are also welcome.

    Read the article

  • How to design a class for managing file paths?

    - by remi bourgarel
    Hi all. In my app, I generate some XML files, for instance "/xml/product/123.xml", where 123 is the product's id and 123.xml contains information about this product. I also have "/xml/customer/123.xml", where 123.xml contains information about the client ... 123. How can I manage these file paths? 1/ I create the file path directly in the serialization method? 2/ I create 2 static classes, CustomerSerializationPathManager and ProductSerializationPathManager, with 1 method each: getPath(int customerID) and getPath(int productID). 3/ I create one static class, SerializationPathManager, with 2 methods: getCustomerPath(int customerID) and getProductPath(int productID). 4/ Something else. I'd prefer solution 3 because I think there's only one reason to change this class: I change the root directory. So I'd like to have your thoughts about it... thx
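    A minimal sketch of option 3, using the method names from the question in C# naming style and the "/xml" root from the example paths: keeping both methods in one static class means the root directory lives in exactly one place.

        public static class SerializationPathManager
        {
            private const string Root = "/xml";   // the only value to change if the root directory moves

            public static string GetCustomerPath(int customerId)
            {
                return Root + "/customer/" + customerId + ".xml";
            }

            public static string GetProductPath(int productId)
            {
                return Root + "/product/" + productId + ".xml";
            }
        }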

    Read the article

  • Visual Studio 2012 Launch Winnipeg – Slides

    - by Dylan Smith
    The Winnipeg .Net User Group hosted a VS 2012 Launch Event at the Imax in Winnipeg on Thursday, Dec 6.  Doing presentations on the giant Imax screen is always fun, and I did the first 2 sessions on: End-To-End Application Lifecycle Management with TFS 2012 Improving Developer Productivity with Visual Studio 2012 Thanks to everybody that came out, and if anybody is interested, my slide decks can be downloaded here: TFS 2012 Slides VS 2012 Slides Also, the virtual machine that I used to do my demos can be downloaded from Brian Keller’s blog here: VS 2012 ALM Virtual Machine

    Read the article

  • Synchronous vs. asynchronous for publish subscribe communication between JavaScript objects

    - by natlee75
    I implemented the publish subscribe pattern in a JavaScript module to be used by entirely client-side oriented JavaScript objects. This module has nothing to do with client-server communications in any way, shape or form. My question is whether it's better for the publish method in such a module to be synchronous or asynchronous, and why. As a very simplified example, let's say I'm building a custom UI for an HTML5 video player widget: One of my modules is the "video" module that contains the VIDEO element and handles the various features and events associated with that element. This would probably have a namespace something like "widgets.player.video." Another is the "controls" module that has the various buttons - play, pause, volume, scrub, fullscreen, etc. This might have a namespace along the lines of "widgets.player.controls." These two modules are children of a parent "player" module ("widgets.player" ??), and as such would have no inherent knowledge of each other when instantiated as children of the "player" object. The "controls" elements would obviously need to be able to effect some changes on the video (click "Play" and the video should play), and vice versa (video's "timeUpdate" event fires and the visual display of the current time in the controls should update). I could tightly couple these modules and pass references to each other, but I'd rather take a more loosely coupled approach by setting up a pubsub-type module that both can subscribe to and publish from. SO (thanks for bearing with me), in this kind of scenario is there an advantage one way or another for synchronous publication versus asynchronous publication? I've seen some solutions posted online that allow for either/or with a boolean flag whereas others automatically do it asynchronously. I haven't personally seen an implementation that just automatically goes with synchronous publication... is this because there's no advantage to it? I know that I can accomplish this with features provided by jQuery, but it seems that there may be too much overhead involved here. The publish subscribe pattern can be implemented with relatively lightweight code designed specifically for this particular purpose, so I'd rather go with that than a more general-purpose eventing system like jQuery's (which I'll use for more general eventing needs :-).

    Read the article

  • Upgrading Visual Studio with Crystal Reports

    - by jkrebsbach
    In the process of updating an app from Visual Studio 2003 to VS 2008.  It happens to have a couple dozen Crystal Reports that it executes regularly. I upgraded Visual Studio to 2008, and when attempting to generate the reports an exception was thrown. A significant portion of the rendering engine for Crystal Reports is not coming from Crystal, it's coming from Visual Studio, and those methods and properties have changed over the years.  I needed to upgrade the report-generating methods from the VS 2003 way of doing things to the VS 2008 way for the reports to generate successfully. Not only that, but this means that while we were previously rendering with Crystal 9 in VS 2003, Visual Studio 2008 will render per Crystal 10, which treats things like column widths in Excel differently (by default, at least), so now we have to go through all of our reports and compare outputs for Crystal just to upgrade the Visual Studio environment that I foolishly believed would not be affected.

    Read the article

  • Any way to use a class extension method to support an interface method in C#?

    - by dudeNumber4
    The console app below compiles, but the interface cast fails at run time. Is there an easy way to make this work? namespace ConsoleApplication1 { class Monkey { public string Shock { get { return "Monkey has been shocked."; } } } static class MonkeyExtensionToSupportIWombat { public static string ShockTheMonkey( this Monkey m ) { return m.Shock; } } interface IWombat { string ShockTheMonkey(); } class Program { static void Main( string[] args ) { var monkey = new Monkey(); Console.WriteLine( "Shock the monkey without the interface: {0}", monkey.Shock ); IWombat wombat = monkey as IWombat; Console.WriteLine( "Shock the monkey with the interface: {0}", wombat.ShockTheMonkey() ); Console.ReadLine(); } } }
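    A minimal sketch of one workaround (the adapter name is illustrative, not from the question): an extension method can never satisfy an interface member, but a small adapter that wraps Monkey and implements IWombat can, so the failing "monkey as IWombat" cast becomes an explicit wrap instead.

        class MonkeyWombatAdapter : IWombat
        {
            private readonly Monkey _monkey;
            public MonkeyWombatAdapter(Monkey monkey) { _monkey = monkey; }

            // forwards the interface call to the wrapped Monkey
            public string ShockTheMonkey() { return _monkey.Shock; }
        }

        // in Main, instead of the "as" cast:
        // IWombat wombat = new MonkeyWombatAdapter(monkey);
        // Console.WriteLine("Shock the monkey with the interface: {0}", wombat.ShockTheMonkey());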

    Read the article

  • Pre-game loading time vs. in-game loading time

    - by Keeper
    I'm developing a game which includes a random maze. There are some AI creatures lurking in the maze, and I want them to follow certain paths according to the maze's shape. There are two possibilities for me to implement that: the first way (which I used) is to calculate several desired lurking paths once the maze is created; the second is to calculate a path only when it is needed, i.e. when a creature starts to lurk along it. My main concern is loading times. If I calculate many paths at the creation of the maze, the pre-loading time is a bit long, so I thought about calculating them when needed. At the moment the game is not 'heavy', so calculating paths mid-game is not noticeable, but I'm afraid it will be once it gets more complicated. Any suggestions, comments or opinions will be of help. Edit: As for now, let p be the number of pre-calculated paths; a creature has a probability of 1/p of taking a new path (which means a path calculation) instead of an existing one. A creature does not start its patrol until the path is fully calculated, of course, so there is no need to worry about it getting killed in the process.
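    A minimal sketch of the second option with memoization (types and names are illustrative, not from the question): a path is computed the first time a creature asks for it and then cached, so each path's cost is paid at most once and none of it lands in the pre-loading phase.

        using System;
        using System.Collections.Generic;

        public struct Cell { public int X; public int Y; }

        public class PatrolPathCache
        {
            private readonly Dictionary<int, IList<Cell>> _paths = new Dictionary<int, IList<Cell>>();
            private readonly Func<int, IList<Cell>> _computePath;   // e.g. an A* search over the maze

            public PatrolPathCache(Func<int, IList<Cell>> computePath)
            {
                _computePath = computePath;
            }

            public IList<Cell> GetPath(int pathId)
            {
                IList<Cell> path;
                if (!_paths.TryGetValue(pathId, out path))
                {
                    path = _computePath(pathId);   // computed lazily, mid-game, only on first request
                    _paths[pathId] = path;
                }
                return path;
            }
        }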

    Read the article

  • Anonymous function vs. separate named function for initialization in jquery

    - by Martin N.
    We just had a somewhat controversial discussion and I would like to see your opinions on the issue: Let's say we have some code that is used to initialize things when a page is loaded and it looks like this: function initStuff() { ...} ... $(document).ready(initStuff); The initStuff function is only called from the third line of the snippet, and never again. Now I would say: usually people put this into an anonymous callback like this: $(document).ready(function() { //Body of initStuff }); because having the function in a dedicated location in the code does not really help with readability, and the call to ready() makes it obvious that this code is initialization stuff. Would you agree or disagree with that decision? And why? Thank you very much for your opinion!

    Read the article

  • How to review the current state of open source vs. closed source graphics drivers?

    - by Bucic
    How can I know whether it's worth replacing the open source drivers installed by default with proprietary ones? Are there any benchmarks or summaries of major known issues? I don't mean 'at the time of writing this post'; I mean an up-to-date status on how the drivers compare. This page https://help.ubuntu.com/community/BinaryDriverHowto/ certainly doesn't do much on the matter, nor does it even mention Intel. EDIT: I've just learned there is no Intel proprietary driver because they made their drivers open source http://askubuntu.com/a/17395/29347

    Read the article

  • Is there a difference between "self-plagiarizing" in programming vs doing so as a writer?

    - by makerofthings7
    I read this Gawker article about how a writer reused some of his older material for new assignments for different magazines. Is there any similar ethical (societal?) dilemma when doing the same thing in the realm of programming? Does reusing a shared library you've accumulated over the years amount to self-plagiarism? What I'm getting at is that it seems that the creative world of software development isn't as stringent regarding self-plagiarism as, say, journalism or blogging. In fact, in one of my interviews at GS I was asked what kind of libraries I've developed over the years, implying that me getting the job would entail co-licensing helpful portions of code to that company. Are there any cases where, although it's legal to self-plagiarize, it would be frowned upon in the software world?

    Read the article

  • Keep a programming language backwards compatible vs. fixing its flaws

    - by Radu Murzea
    First, some context (stuff that most of you know anyway): Every popular programming language has a clear evolution, most of the time marked by its version: you have Java 5, 6, 7 etc., PHP 5.1, 5.2, 5.3 etc. Releasing a new version makes new APIs available, fixes bugs, adds new features, new frameworks etc. So all in all: it's good. But what about the language's (or platform's) problems? If and when there's something wrong in a language, developers either avoid it (if they can) or they learn to live with it. Now, the developers of those languages get a lot of feedback from the programmers that use them. So it kind of makes sense that, as time (and version numbers) goes by, the problems in those languages will slowly but surely go away. Well, not really. Why? Backwards compatibility, that's why. But why is this so? Read below for a more concrete situation. The best way I can explain my question is to use PHP as an example: PHP is loved by thousands of people and hated by just as many thousands. All languages have flaws, but apparently PHP is special. Check out this blog post. It has a very long list of so-called flaws in PHP. Now, I'm not a PHP developer (not yet), but I read through all of it and I'm sure that a big chunk of that list describes real issues. (Not all of it, since it's potentially subjective). Now, if I was one of the guys who actively develops PHP, I would surely want to fix those problems, one by one. However, if I do that, then code that relies on a particular behaviour of the language will break if it runs on the new version. Summing it up in 2 words: backwards compatibility. What I don't understand is: why should I keep PHP backwards compatible? If I release PHP version 8 with all those problems fixed, can't I just put a big warning on it saying: "Don't run old code on this version!"? There is a thing called deprecation. We've had it for years and it works. In the context of PHP: look at how these days people actively discourage the use of the mysql_* functions (and instead recommend mysqli_* and PDO). Deprecation works. We can use it. We should use it. If it works for functions, why shouldn't it work for entire languages? Let's say I (the developer of PHP) do this: Launch a new version of PHP (let's say 8) with all of those flaws fixed. New projects will start using that version, since it's much better, clearer, more secure etc. However, in order not to abandon older versions of PHP, I keep releasing updates to them, fixing security issues, bugs etc. This makes sense for reasons that I'm not listing here. It's common practice: look for example at how Oracle kept updating version 5.1.x of MySQL, even though it mostly focused on version 5.5.x. After about 3 or 4 years, I stop updating old versions of PHP and leave them to die. This is fine, since in those 3 or 4 years, most projects will have switched to PHP 8 anyway. My question is: Do all these steps make sense? Would it be so hard to do? If it can be done, then why isn't it done? Yes, the downside is that you break backwards compatibility. But isn't that a price worth paying? As an upside, in 3 or 4 years you'll have a language that has 90% of its problems fixed... a language much more pleasant to work with. Its name will ensure its popularity. EDIT: OK, so I didn't express myself correctly when I said that in 3 or 4 years people will move to the hypothetical PHP 8. What I meant was: in 3 or 4 years, people will use PHP 8 if they start a new project.

    Read the article

  • Any ideas about how to make a Programming Techniques class more interesting?

    - by Eedoh
    Hello. I already found a similar question here on SO, but almost all the answers were more philosophical than practical. I'd like you to share some of your PRACTICAL ideas about how to make my course more interesting. It doesn't matter how much effort it takes from me. I even thought about trying to motivate them to pick some topic at the beginning of the course and to work on it as some kind of real, small, startup project that they could maybe financially exploit once it's finished. But I'm afraid that most of them will not see the project through to the end, and that it could be boring for them to work on one thing all year long. Also I thought about involving them in Torcs, but I'm afraid most of them wouldn't be up to the task. Btw, Torcs is a car racing simulation, but there's an API for developers so they can develop their own AI for the driver and then race their cars against the other programmers' AIs. I'm not asking here for problem examples, as I asked a separate question about that. I need ideas about making my lectures more interesting and fun.

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up/upgrading the MySQL server. There are not so many links between data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.

    Read the article
