Search Results

Search found 17731 results on 710 pages for 'programming practices'.

Page 37/710

  • APress Deal of the Day 25/Jul/2013 - Pro HTML5 Programming

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/07/25/apress-deal-of-the-day-25jul2013---pro-html5-programming.aspx

    Today's $10 deal of the day from APress at http://www.apress.com/9781430238645 is Pro HTML5 Programming: "An update of Pro HTML5 Programming, including major corrections for WebSockets functionality, along with new chapters covering the drag and drop API, as well as SVG."

    Read the article

  • How to Write Manageable Code With Functional Programming?

    - by dade
    I just started with functional programming (Node.js), and from the look of things the code I am writing could grow into one hell of a code base to manage, compared to programming languages that have an object-oriented paradigm. With OOP I am familiar with practices that ensure your code is easily managed and extensible, but I am not sure of similar conventions for functional programming.

    Read the article

  • Which programming language to go for in order to learn Object Oriented Programming? [closed]

    - by Maxood
    If someone has a good grasp of logic and procedural programming, which language should they start with to learn OOP? Also, why is C++ mostly taught at schools, whereas Java is a pure object-oriented language (and also the language for making Android apps)? Why is Objective-C not taught for making apps on the iPhone? I am seeking the right answer keeping these 2 factors in view: the learner's background in procedural programming, and the economic or job-market demand for programming languages. Here is a list of 10 programming languages I would like to see justifications for:
    1. Java
    2. C++
    3. Objective-C
    4. Scala
    5. C#
    6. PHP
    7. Python
    8. Java
    9. JavaScript (not sure if it is a fully featured OOP language)
    10. Ruby (not sure if it is a fully featured OOP language)

    Read the article

  • APress deal of the day 13/Sep/2012 - Beginning C# Object-Oriented Programming

    - by TATWORTH
    Today's $10 deal of the day from APress at http://www.apress.com/9781430235309 is Beginning C# Object-Oriented Programming: "Beginning C# Object-Oriented Programming brings you into the modern world of development, as you master the fundamentals of programming with C# and learn to develop efficient, reusable, elegant code through the object-oriented programming (OOP) methodology." Here is a summary of my earlier review: This is a good book to learn C# by doing something practical. The book provides an excellent series of hands-on activities. So should you get a copy for your trainee C# programmers? Yes! Do I recommend it for people learning C# 2010 on their own? Yes! Those of you who have written to me for training in C# (assuming the messages were not from BOTS!), should you buy this book? YES!

    Read the article

  • Deepest sense of programming [closed]

    - by xralf
    I have suffered from depression for a few months. Programming was one of my big passions (as a hobby). I had the motivation to achieve my goals (projects), to read books and articles about it, to take an interest in algorithms and data structures, compilers, etc. Then my mind started to think that it has no sense, that the result is useless. I realized that I loved programming because of an illusion that it has deep sense, that I loved playing with code every day as nothing else, with the feeling that it leads somewhere. Could I rationalize that it makes sense to work on some programming project? That there is a deep sense in doing it and enjoying this activity? I have no idea what else I should do in my free time; the mornings without motivation are very depressing. It was a nice time when I had the illusion that programming is enjoyable. Could you help me figure out the deepest sense of programming in this world? Why love it again? What could be achieved and realized? (Things like a higher salary and ego are not what I'm looking for.)

    Read the article

  • Free ebook: Programming Windows 8 Apps with HTML, CSS, and JavaScript

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/11/03/free-ebook-programming-windows-8-apps-with-html-css-and.aspx

    At http://blogs.msdn.com/b/microsoft_press/archive/2012/10/29/free-ebook-programming-windows-8-apps-with-html-css-and-javascript.aspx there is a free e-book: Programming Windows 8 Apps with HTML, CSS, and JavaScript. "This free E-book provides comprehensive coverage of the platform for Windows Store apps."

    Read the article

  • game programming career, vc++ reference and future of it [closed]

    - by Pappu Bacha
    1) I have quite a lot of interest in game programming and (to my mind) I am quite good at programming; I have developed some console-based animations and text-based games (like copter-it, snake, and a music visualization). Should I invest in game programming? I have 2 years of college left.
    2) If I pursue a career in game programming and choose to go only with C++ and DirectX, is that enough? Is assembly language necessary?
    3) Is Visual C++ or MFC dead? Should I invest in it or not?
    4) I am unable to find any reference book for Visual C++ 2008 or later (just like the C++: The Complete Reference book). I need a book that covers the basic fundamentals and most of the libraries.

    Read the article

  • Derek Brink shares "Worst Practices in IT Security"

    - by Darin Pendergraft
    Derek Brink is Vice President and Research Fellow in IT Security for the Aberdeen Group. He has established himself as an IT security expert over a long and impressive career with companies and organizations including RSA, Sun, HP, the PKI Forum, and the Central Intelligence Agency. So shouldn't he be talking about "Best Practices in IT Security"? In his latest blog he talks about the thought processes that drive the wrong behavior, and very cleverly shows how that incorrect thinking exposes weaknesses in our IT environments. Check out his latest blog post, titled "The Screwtape CISO: Memo #1 (silos, stovepipes and point solutions)".

    Read the article

  • GDC 2012: Best practices in developing a web game

    GDC 2012: Best practices in developing a web game (pre-recorded GDC content). There's a new wave of console/PC/mobile game developers moving to the web, looking to take advantage of the massive user base alongside the powerful social graphs available there. The web as a platform is a very different technology stack than consoles or mobile, and as such requires different development processes. This talk is targeted at game developers who are looking to understand more about the development processes for web development, including where to host your assets; proper techniques for caching to the persistent file store; and dealing with sessions, storing user state, user login, game state storage, social graph integration, localization, audio, rendering, hardware detection, and testing/distribution. If you're interested in developing a web game, you need to attend this talk! Speaker: Colt McAnlis From: GoogleDevelopers Views: 5149 131 ratings Time: 01:03:52 More in Science & Technology

    Read the article

  • What are the best practices for rapid prototyping using exclusively HTML/CSS/JS

    - by charlax
    I'm developing a prototype of a web application. I want to use only HTML, CSS and JavaScript. I prefer to use my text editor and not have to learn (or pay for, for that matter) a new tool like Axure. What would be, to your mind, the best practices? To me there are many qualities of a good prototype: quickly developed; easy to improve; fair fidelity as regards UX (this disqualifies tools like OmniGraffle or PowerPoint that are more dedicated to wireframing). I am trying to learn as quickly as possible, but I would like to know, based on your experience, how you managed to be both quick and agile. Reference: http://www.boxesandarrows.com/view/prototyping-with

    Read the article

  • Best practices for logging user actions in production

    - by anthonypliu
    I was planning on logging a lot of different things in my production environment, such as when a user: logs in, logs off; changes their profile; edits account settings; changes their password; etc. Is this good practice in a production environment? Also, what is a good way to log all this? I am currently using the following code block to log:

      public void LogMessageToFile(string msg)
      {
          System.IO.StreamWriter sw = System.IO.File.AppendText(
              GetTempPath() + @"MyLogFile.txt");
          try
          {
              string logLine = System.String.Format(
                  "{0:G}: {1}.", System.DateTime.Now, msg);
              sw.WriteLine(logLine);
          }
          finally
          {
              sw.Close();
          }
      }

    Will this be OK for production? My application is very new, so I'm not expecting millions of users right away or anything; I'm looking for the best practices for keeping track of actions on a website, or whether it is even a best practice to do so.
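    One caveat worth adding (my note, not from the question): System.IO.File.AppendText from concurrent web requests can collide on the file. A minimal sketch of a safer variant, assuming the same GetTempPath() helper from the question, serializes writers and uses a using block:

      private static readonly object LogLock = new object();

      public void LogMessageToFile(string msg)
      {
          // Serialize writers so concurrent requests don't contend for the file.
          lock (LogLock)
          {
              using (var sw = System.IO.File.AppendText(GetTempPath() + @"MyLogFile.txt"))
              {
                  sw.WriteLine("{0:G}: {1}.", System.DateTime.Now, msg);
              }
          }
      }

    A logging library would also handle rotation and buffering; this is only the smallest step up from the snippet above.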

    Read the article

  • Best practices when loading images for improving page loading speed

    - by Naoise Golden
    I am working on optimizing a page's loading speed. Here are some analytics: notice how the images, although accounting for only 65% of the total size (1.1 MB), are by far the slowest-loading assets: 96% of load time. I'd like to know the recommended practices for optimizing loading speed, taking only images into account. Some of the techniques we are already applying:
    - image compression
    - images hosted on a cookieless domain and a CDN
    - spriting everything that can be sprited
    - HTTP headers: keep-alive, and Expires set to one year (see the example below)
    Disclaimer: I have gone through the available documentation; I think by focusing on image loading optimization I am not creating a duplicate or a subjective question.
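    As a concrete illustration of the last item above (my example, not from the question), a far-future policy typically sends response headers like these, where 31536000 seconds is one year:

      Cache-Control: public, max-age=31536000
      Expires: Thu, 31 Dec 2015 23:59:59 GMT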

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains).

    Overview of Dynamic Reconfiguration

    Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load.

    How it works (in broad general terms)

    Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris can then start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used.

    Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time.

      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
      ldom1    active  -n----  5000  16    8G      0.9%  6h 59m
      primary # ldm set-core 5 ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.2%  6d 22h 29m
      ldom1    active  -n----  5000  40    8G      0.1%  6h 59m
      primary # ldm set-core 2 ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
      ldom1    active  -n----  5000  16    8G      0.9%  6h 59m

    Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM.

      primary # ldm set-mem 24g ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.1%  6d 22h 36m
      ldom1    active  -n----  5000  16    24G     0.2%  7h 6m
      primary # ldm set-mem 8g ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.7%  6d 22h 37m
      ldom1    active  -n----  5000  16    8G      0.3%  7h 7m

    What if the device is in use? (this is the anecdote that inspired this blog post)

    If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed.

      primary # ldm rm-vnet vnet19 ldom1
      Guest LDom returned the following reason for failing the operation:
      Resource                                                    Information
      ----------------------------------------------------------  -----------------------
      /devices/virtual-devices@100/channel-devices@200/network@1  Network interface net1
      VIO operation failed because device is being used in LDom ldom1
      Failed to remove VNET instance

    That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1, so:

      ldom1 # ifconfig net1 down unplumb

    Now I can dispose of it, and even the virtual switch I had created for it:

      primary # ldm rm-vnet vnet19 ldom1
      primary # ldm rm-vsw primary-vsw9

    If I had to take away the device disruptively, I could have used ldm rm-vnet -f, but that could disrupt whoever was using it. It's better if that can be avoided.

    Summary

    Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris work cooperatively to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.

    Read the article

  • Basic game architechture best practices in Cocos2D on iOS

    - by MrDatabase
    Consider the following simple game: 20 squares floating around an iPhone's screen. Tapping a square causes that square to disappear. What's the "best practices" way to set this up in Cocos2D? Here's my plan so far:
    - One Objective-C GameState singleton class (maintains the list of active squares)
    - One CCScene (since there are no menus, etc.)
    - One CCLayer (child node of the scene)
    - Many CCSprite nodes (one for each square, all child nodes of the layer)
    - Each sprite listens for a tap on itself. Receive tap = remove from GameState.
    Since I'm relatively new to Cocos2D I'd like some feedback on this design. For example, I'm unsure about the GameState singleton. Perhaps it's unnecessary.

    Read the article

  • Examples and Best Practices for Seeding Defects?

    - by MathAttack
    Defect seeding seems to be one of the few ways a development organization can tell how thorough an independent testing group is. I'm a fan of using metrics to help counter overconfidence biases and drive discussions around facts. With that said, I haven't seen defect seeding used in practice. Are there best practices above and beyond what McConnell explained? Are there public examples where this has been done? In the absence of the above, any thoughts on why it hasn't been done more? Thanks in advance!
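    For context, here is how the arithmetic behind seeded-defect metrics usually runs (my illustration with made-up numbers, following the classic Mills-style estimate): seed S known defects into the build; if the independent testers find s of the seeded defects and n real ones, the implied total count of real defects is

      \hat{N} = n \cdot \frac{S}{s} = 60 \cdot \frac{20}{15} = 80

    so with S = 20 seeded, s = 15 of them found, and n = 60 real defects found, roughly 80 - 60 = 20 real defects would remain undiscovered.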

    Read the article

  • Learning Library – Enterprise Manager Cloud Control 12c: Best Practices for Middleware Management

    - by JuergenKress
    This self-paced course teaches you best practices for using Oracle Enterprise Manager Cloud Control 12c to manage your WebLogic and SOA applications and infrastructure. It consists of interactive lectures, videos, review sessions, and optional demonstrations. This course covers Enterprise Manager Cloud Control 12c licensed with the WebLogic Server Management Pack Enterprise Edition and the SOA Management Pack Enterprise Edition. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: EM12c, Enterprise Manager training, education, training, SOA Community, Oracle SOA, Oracle BPM, BPM, Community, OPN, Jürgen Kress

    Read the article

  • Ubuntu Live USB: best practices for secure net traffic

    - by Och
    I want to set up a live USB with Ubuntu in the most secure way, so I want to have the persistent data on a second USB, which is not much of a problem. How do I configure very safe Internet surfing (through a VPN?)? What are the best practices for having Ubuntu live on one USB, the persistent data on another, and Internet access through a VPN? (Ubuntu Privacy Remix gives most of this, except the VPN config.) Any ideas on how to combine the best of Ubuntu Privacy Remix with Internet access through a VPN?
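    For the VPN piece specifically (a minimal sketch, not a full recipe - myprovider.ovpn stands in for whatever config file your VPN provider supplies), OpenVPN on a live session can be as simple as:

      sudo apt-get install openvpn
      sudo openvpn --config myprovider.ovpn

    Keeping the .ovpn file and credentials on the persistent-data USB keeps the live stick itself clean.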

    Read the article

  • DBA Best Practices: A Blog Series

    - by Argenis
      Introduction After the success of the “Demystifying DBA Best Practices” Pre-Conference that my good friend Robert Davis, a.k.a. SQLSoldier [Blog|Twitter] and I delivered at multiple events, including the PASS Summit 2012, I have decided to blog about some of the topics discussed at the Pre-Con. My thanks go to Robert for agreeing to share this content with the larger SQL Server community. This will be a rather lengthy blog series - and as in the Pre-Con, I expect a lot of interaction and feedback. Make sure you throw in your two cents in the comments section of every blog post. First topic that I’ll be discussing in this blog series: The thing of utmost importance for any Database Administrator: the data. Let’s discuss the importance of backups and a solid restore strategy. Care to share your thoughts on this subject in the comments section below?

    Read the article

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API. However, I'm confused about what the best way to do it would be. If you have a large third-party app base, just yanking old versions of the API seems like the wrong way to do it, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever, as they might be outdated or there may be significant changes that make working with them impossible. What are some best practices for deprecating old web APIs?
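    One common pattern (a sketch of my own, not from the question - ASP.NET Web API is assumed, and the route and retirement date are invented) is to keep the old version running but flag every response so client developers see the shutdown coming:

      using System.Net.Http;
      using System.Threading;
      using System.Threading.Tasks;

      // Flags calls to a retired API version instead of yanking it outright.
      public class DeprecationHandler : DelegatingHandler
      {
          protected override async Task<HttpResponseMessage> SendAsync(
              HttpRequestMessage request, CancellationToken cancellationToken)
          {
              var response = await base.SendAsync(request, cancellationToken);
              // Warn clients still calling /v1/ so they can migrate before shutdown.
              if (request.RequestUri.AbsolutePath.StartsWith("/v1/"))
              {
                  response.Headers.Add("Warning", "299 - \"v1 is deprecated; retires 2013-06-01\"");
              }
              return response;
          }
      }

    Registered via config.MessageHandlers.Add(new DeprecationHandler()), this gives third-party apps a migration window while you monitor usage of the old version down to zero.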

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?

    A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems, just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher-quality applications in terms of scalability, everything else being equal.

    Before going further I need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices.

    For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close it (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible - like a REST service, for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale. [Note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale, for the purpose of this discussion.]

    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).

    The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No - not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes the better design, because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality.

    Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better...

    ... the cloud forces better design practices. My 2 cents.
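    To make the contrast concrete, here is a minimal sketch (mine, not from the post - the endpoint and behavior are invented) of the loosely coupled style: a stateless HTTP service where every request carries all of its inputs, so any number of identical instances can sit behind a load balancer:

      using System;
      using System.Net;
      using System.Text;

      class StatelessEchoService
      {
          static void Main()
          {
              var listener = new HttpListener();
              listener.Prefixes.Add("http://localhost:8080/echo/");
              listener.Start();
              while (true)
              {
                  HttpListenerContext ctx = listener.GetContext();
                  // All inputs come from the request itself - no per-client session is kept.
                  string name = ctx.Request.QueryString["name"] ?? "world";
                  byte[] body = Encoding.UTF8.GetBytes("hello " + name);
                  ctx.Response.ContentLength64 = body.Length;
                  ctx.Response.OutputStream.Write(body, 0, body.Length);
                  ctx.Response.Close(); // release the connection immediately
              }
          }
      }

    Because no state survives between requests, killing this process or adding ten more copies changes nothing for the client - which is exactly the property that lets the REST design scale.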

    Read the article

  • Best practices for upgrading user data when updating versions of software

    - by Javy
    In my code I check the current version of the software on launch and compare it to the version stored in the user's data file(s). If the version is newer, I call different methods to update the old data to the newer data version, if necessary. I usually have to add a new method to convert the data with each update that changes user data in some way, and I cannot remove the old ones in case someone missed an update, so the app must be able to go through each method call in turn and update the data until it is current. With larger data sets, this could be a problem. In addition, I recently had a brief discussion about this with another StackOverflow user, who indicated he always appended a date stamp to the filename to manage data versions, although his reasoning as to why this was better than storing the version data in the file itself was unclear. Since I've rarely seen management of user data versions in the books I've read, I'm curious what the best practices are for naming user data files and for updating older data to newer versions.
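    A minimal sketch of the chained-migration approach described above (my own illustration - the field names and steps are invented):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class UserData
      {
          public int Version;
          public Dictionary<string, string> Fields = new Dictionary<string, string>();
      }

      static class Migrations
      {
          // Each step is keyed by the version it upgrades TO, so enumerating
          // the sorted dictionary applies them oldest-first.
          static readonly SortedDictionary<int, Action<UserData>> Steps =
              new SortedDictionary<int, Action<UserData>>
              {
                  { 2, d => d.Fields["email"] = "" },        // v1 -> v2: add email field
                  { 3, d => d.Fields.Remove("legacyFlag") }, // v2 -> v3: drop obsolete field
              };

          public static void UpgradeToCurrent(UserData data)
          {
              // Apply every step newer than the file's version, in order.
              foreach (var step in Steps.Where(s => s.Key > data.Version))
              {
                  step.Value(data);
                  data.Version = step.Key;
              }
          }
      }

    Old steps stay in the table forever, which is exactly the maintenance burden the question describes; the date-stamped-filename scheme trades that for keeping old readers around instead.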

    Read the article

  • Google BigQuery - Best Practices for Loading your Data and open Office Hours

    Google BigQuery - Best Practices for Loading your Data and open Office Hours. Michael Manoochehri and Ryan Boyd from the DevRel team for cloud data services will be streaming to you live! They'll be discussing how to load your data into BigQuery and the various options available -- from commercial ETL tools to App Engine's Pipeline API and MapReduce frameworks, to simple UNIX command-line tools. They'll then open it up for a general office-hours session on ingestion and other topics. Please use the moderator link to ask your questions. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Science & Technology
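    For a flavor of the simplest ingestion option mentioned (my example, not from the session - the dataset, table, bucket, and schema are invented), loading a CSV from Cloud Storage with the bq command-line tool looks roughly like:

      bq load --source_format=CSV mydataset.mytable gs://mybucket/data.csv name:STRING,age:INTEGER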

    Read the article
