Search Results

Search found 59205 results on 2369 pages for 'time capsule'.


  • Webfarm and IIS configuration tips/tricks

    - by steve schofield
    I was recently talking with some good friends about performance tips and what an IIS administrator can do on the server side. I also see this question from time to time in the forums at http://forums.iis.net. Of course, you should test individual settings in a controlled environment, under load testing, before just implementing them on your production farm.

    - Enable IIS compression (both static and dynamic if possible; set the level to 9). If you are running IIS 6, check out this article by Scott Forsyth.
    - Run FRT (Failed Request Tracing) for long-running pages.
    - Use SQL connection pooling in code.
    - Look at using PAL with performance counters (http://blogs.iis.net/ganekar/archive/2009/08/12/pal-performance-analyzer-with-iis.aspx).
    - Look at load testing using the Visual Studio load testing tools.
    - Use Log Parser to find long-running pages. Here are a couple of examples.
    - Look at CPU, memory and disk counters. Make sure the server has enough resources.
    - Use the same machineKey across all nodes.
    - Localize content vs. using UNC-based content on a single server (my UNC tag with great posts).
    - Set content expiration.
    - Keep ETags the same across all web farm nodes.
    - Disable the Scalable Networking Pack.
    - Use YSlow or the developer tools in Chrome to help measure the client experience improvements.

    Additionally, some basic counters for measuring applications are Concurrent Connections, Requests/Sec and Requests Queued. I would also recommend checking out Chapter 17 in the IIS 7 Resource Kit; it was one of the chapters I authored. :) I strongly suggest testing one change at a time to see how it helps improve your performance. Hopefully this post provides a few options to review in your environment. Cheers, Steve Schofield, Microsoft MVP - IIS
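
    As a rough illustration of watching those application counters from code, here is a minimal C# sketch using System.Diagnostics.PerformanceCounter. The exact category, counter and instance names vary between IIS/.NET versions, so treat the names below as assumptions to verify in perfmon first.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class IisCounterSample
        {
            static void Main()
            {
                // Category/counter/instance names are assumptions; confirm them in perfmon.
                var requestsPerSec = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__", readOnly: true);
                var requestsQueued = new PerformanceCounter("ASP.NET", "Requests Queued", readOnly: true);
                var connections = new PerformanceCounter("Web Service", "Current Connections", "_Total", readOnly: true);

                // Rate counters need two samples before they report a meaningful value.
                for (int i = 0; i < 5; i++)
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("Req/sec: {0:F1}  Queued: {1:F0}  Connections: {2:F0}",
                        requestsPerSec.NextValue(), requestsQueued.NextValue(), connections.NextValue());
                }
            }
        }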

    Read the article

  • Java Spotlight Episode 150: James Gosling on Java

    - by Roger Brinkley
    Interview with James Gosling, father of Java and Java Champion, on the history of Java, his work at Liquid Robotics, Netbeans, the future of Java and what he sees as the next revolutionary trend in the computer industry. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes Feature Interview James Gosling received a BSc in Computer Science from the University of Calgary, Canada in 1977. He received a PhD in Computer Science from Carnegie-Mellon University in 1983. The title of his thesis was "The Algebraic Manipulation of Constraints". He spent many years as a VP & Fellow at Sun Microsystems. He has built satellite data acquisition systems, a multiprocessor version of Unix, several compilers, mail systems and window managers. He has also built a WYSIWYG text editor, a constraint based drawing editor and a text editor called `Emacs' for Unix systems. At Sun his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has been a contributor to the Real-Time Specification for Java, and a researcher at Sun labs where his primary interest was software development tools.     He then was the Chief Technology Officer of Sun's Developer Products Group and the CTO of Sun's Client Software Group. He briefly worked for Oracle after the acquisition of Sun. After a year off, he spent some time at Google and is now the chief software architect at Liquid Robotics where he spends his time writing software for the Waveglider, an autonomous ocean-going robot.

    Read the article

  • Optimize Images Using the ASP.NET Sprite and Image Optimization Framework

    The HTML markup of a web page includes the page's textual content, semantic and styling information, and, typically, several references to external resources. External resources are content that is part of a web page but separate from the web page's markup - things like images, style sheets, script files, Flash videos, and so on. When a browser requests a web page it starts by downloading its HTML. Next, it scans the downloaded HTML for external resources and starts downloading those. A page with many external resources usually takes longer to completely load than a page with fewer external resources because there is an overhead associated with downloading each external resource. For starters, each external resource requires the browser to make an HTTP request to retrieve the resource. What's more, browsers have a limit on how many HTTP requests they will make in parallel. For these reasons, a common technique for improving a page's load time is to consolidate external resources in a way that reduces the number of HTTP requests the browser must make to load the page in its entirety. This article examines the free and open-source ASP.NET Sprite and Image Optimization Framework, a project developed by Microsoft for improving a web page's load time by consolidating images into a sprite or by using inline, base-64 encoded images. In a nutshell, this framework makes it easy to implement practices that will improve the load time for a web page that displays several images. Read on to learn more! Read More >
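
    As a small, hedged illustration of the inline, base-64 encoded image technique that the framework automates, the sketch below builds a data URI by hand (the file name is hypothetical):

        using System;
        using System.IO;

        class InlineImageSketch
        {
            static void Main()
            {
                // Read a small image and embed it directly in the markup as a data URI,
                // which saves the browser one HTTP request for that image.
                byte[] bytes = File.ReadAllBytes("icon.png");
                string dataUri = "data:image/png;base64," + Convert.ToBase64String(bytes);

                Console.WriteLine("<img src=\"{0}\" alt=\"icon\" />", dataUri);
            }
        }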

    Read the article

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu Cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed:
    1. Installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS.
    2. For the node, I installed the Ubuntu 12.04 Server image on a USB stick. Then I restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result: no node was visible in the MAAS UI.
    3. Since step 2 didn't work, I added the node via the maas-cli command. After the execution of this command the node showed up in my MAAS UI, but the status continues to be "Commissioning" for a long time. Then I executed "maas-cli maas nodes check-commissioning" and I got "Unrecognised signature: POST check_commissioning".
    I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no error related to commissioning (pserv.log / maas.log / celery.log / celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x". Not sure if it indicates anything, as the IP mentioned is neither the node's nor the MAAS server's. I also verified the time on the server and node using the date command (at one instance the times were: server: Fri Nov 16 18:15:51 IST 2012 and node: Fri Nov 16 18:15:43 IST 2012). Not sure if 'date' is the right command to set the time. I have also checked maas_local_settings.py for the MAAS URL. I'm not sure which logs need to be verified. Is there any log that can be checked on the node? Thanks Vinitha

    Read the article

  • Advice on career development [closed]

    - by Mike Young
    I am an amateur programmer working at a start-up. I didn't try coding at college. I've been working for 2 months now on web development and I'm satisfied with my progress. My project will go live soon. I work on the front end and my colleague integrates my work into his. So I decided to learn back-end technologies so that I would be able to work on a project from scratch and help my company build up. I recently got to know about the technologies used by Facebook and was fascinated to learn and work on them, which keeps me motivated. Now I want to work on building a product from scratch, be good at database concepts and a language like Ruby or Python, and get to know load balancing, dynamic requests from servers, hosting a website, real-time communication, secure login, implementing a sophisticated search feature for the app, and using Git by the end of the project. I would like to be a full-stack developer in due course of time and learn everything in detail. I have decided to keep myself outside any time frame and learn every concept in detail. I would like to use both relational and non-relational databases for the project. I have no experience except some beginner knowledge of HTML5, CSS and JavaScript. I would like some advice on how to proceed step by step, what technologies to pick up, and a project idea which includes all of the above.

    Read the article

  • Post build events using ROBOCOPY instead of XCOPY

    - by Vizioz Limited
    I don't know about you, but for a long time I have used XCOPY statements in my Visual Studio post-build events to copy my Umbraco files from the project folders to the local version of the website associated with the project. For the last few months we have been building a website framework for a client, who has subsequently sold the site to 5 clients, each with a different skin and some variations in their functional requirements. So, we now have a single source solution that builds and copies the site files into 5 separate local websites, which enables us to easily test them all. What we had found was that this process was starting to slow up our build process and was reaching 30-45 seconds on a high-spec quad-core machine (and slower on others). Today I asked Colin to create separate Solution Configurations within Visual Studio so that while we were developing we could target a single site, and when we wanted to test all sites, we could target "ALL" and the post-build script would then copy the files to all sites. This worked well, and with a couple of other optimisations, our build was now taking about 10 seconds for a single site. Then Colin came across ROBOCOPY and suggested that maybe this would be a suitable alternative to XCOPY. Well, I had not heard of it... (shock horror, some of you shout; some, I am sure like me, are also wondering what it is!) ROBOCOPY is new in Windows Vista & Windows 7 (you can also download it for XP & Windows 2003) and it has a lot of additional features. The switches that were most interesting to us were:
    - /MIR = Mirror a folder tree
    - /XD = Exclude Directories
    - /NP = No Progress (i.e. it does not give you a chart of its results, which just fills up your Output window!)
    So, we set about implementing ROBOCOPY. We decided to use the /MIR switch on all folders that we knew were always stored in our project folders:
    - images
    - css
    - masterpages
    - xslt
    And for other files we just used the straight ROBOCOPY functionality. We also decided to exclude all the .SVN directories using the /XD switch, and finally we added the /NP switch as mentioned above. The beauty of all of this is the /MIR functionality, as it means that only files that have changed will be copied across, which greatly speeds up the process, especially on the images folders, which previously copied across on every build. Now, if we add a new image to the project it will be copied across automatically and then never again, unless we change it of course! The build time now for all sites is approximately 4 seconds, and for a single site, 2 seconds. I would highly recommend taking the time to make the same optimisations to your build processes if you have not done so already.
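
    For anyone wanting to try the same thing, a post-build event along these lines is a reasonable starting point. The site paths are hypothetical, and note that ROBOCOPY uses exit codes below 8 to mean success, which Visual Studio would otherwise report as a failed build step:

        REM Mirror the project folders into the local site (hypothetical paths).
        REM /MIR mirrors the folder tree, /XD .svn skips Subversion metadata, /NP hides per-file progress output.
        robocopy "$(ProjectDir)css" "C:\inetpub\wwwroot\Site1\css" /MIR /XD .svn /NP
        robocopy "$(ProjectDir)images" "C:\inetpub\wwwroot\Site1\images" /MIR /XD .svn /NP

        REM ROBOCOPY returns 1 when files were copied; map its "success" codes (0-7) back to 0 so the build does not fail.
        if %errorlevel% leq 7 exit 0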

    Read the article

  • Project Euler 51: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 51.  I know I started back up with Python this week, but I have three more Ruby solutions in the hopper and I wanted to share. For the record, Project Euler 51 was the second hardest Euler problem for me thus far. Yeah. As always, any feedback is welcome.

        # Euler 51
        # http://projecteuler.net/index.php?section=problems&id=51
        # By replacing the 1st digit of *3, it turns out that six
        # of the nine possible values: 13, 23, 43, 53, 73, and 83,
        # are all prime.
        #
        # By replacing the 3rd and 4th digits of 56**3 with the
        # same digit, this 5-digit number is the first example
        # having seven primes among the ten generated numbers,
        # yielding the family: 56003, 56113, 56333, 56443,
        # 56663, 56773, and 56993. Consequently 56003, being the
        # first member of this family, is the smallest prime with
        # this property.
        #
        # Find the smallest prime which, by replacing part of the
        # number (not necessarily adjacent digits) with the same
        # digit, is part of an eight prime value family.
        timer_start = Time.now

        require 'mathn'

        def eight_prime_family(prime)
          0.upto(9) do |repeating_number|
            # Assume mask of 3 or more repeating numbers
            if prime.count(repeating_number.to_s) >= 3
              ctr = 1
              (repeating_number + 1).upto(9) do |replacement_number|
                family_candidate = prime.gsub(repeating_number.to_s, replacement_number.to_s)
                ctr += 1 if (family_candidate.to_i).prime?
              end
              return true if ctr >= 8
            end
          end
          false
        end

        # Wanted to loop through primes using Prime.each
        # but it took too long to get to the starting value.
        n = 9999
        while n += 2
          next if !n.prime?
          break if eight_prime_family(n.to_s)
        end

        puts n
        puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • Project Euler 53: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 53.  I first attempted to solve this problem using the Ruby combinations libraries. That didn’t work out so well. With a second look at the problem, the provided formula ended up being just the thing to solve the problem effectively. As always, any feedback is welcome.

        # Euler 53
        # http://projecteuler.net/index.php?section=problems&id=53
        # There are exactly ten ways of selecting three from five,
        # 12345: 123, 124, 125, 134, 135, 145, 234, 235, 245,
        # and 345
        # In combinatorics, we use the notation, 5C3 = 10.
        # In general,
        #
        # nCr = n! / r!(n-r)!, where r <= n,
        # n! = n(n-1)...3*2*1, and 0! = 1.
        #
        # It is not until n = 23, that a value exceeds
        # one-million: 23C10 = 1144066.
        #
        # How many, not necessarily distinct, values of nCr,
        # for 1 <= n <= 100, are greater than one-million?
        timer_start = Time.now

        # There's no factorial method in Ruby, I guess.
        class Integer
          # http://rosettacode.org/wiki/Factorial#Ruby
          def factorial
            (1..self).reduce(1, :*)
          end
        end

        def combinations(n, r)
          n.factorial / (r.factorial * (n-r).factorial)
        end

        answer = 0
        100.downto(3) do |c|
          (2).upto(c-1) { |r| answer += 1 if combinations(c, r) > 1_000_000 }
        end

        puts answer
        puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can - I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one-hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each session; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again, because there are so many what-ifs when you’ve finished writing something: “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, and we were still trying to figure out how you go about the Game of Life. So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is:
    - Write a test, watch it fail for the right reason
    - Write code to pass the test, watch it pass
    - Refactor, check the test still passes
    - Repeat!
    It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging and it made them think in a different way, which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: you are supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

        public T If<T>(bool condition, Func<T> left, Func<T> right)
        {
            var dict = new Dictionary<bool, Func<T>> { { true, left }, { false, right } };
            return dict[condition].Invoke();
        }

    That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make things easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing; what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong - one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have a discussion about things, which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault.
I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: also people said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, they promote functional programming. And TDD people found really hard. “Tell, don’t ask” people found really, really hard and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.

    Read the article

  • 2D character controller in unity (trying to get old-school platformers back)

    - by Notbad
    These days I'm trying to create a 2D character controller in Unity (using physics). I'm fairly new to physics engines and it is really hard to get the control feel I'm looking for. I would be really happy if anyone could suggest a solution for a problem I'm having. This is my FixedUpdate right now:

        public void FixedUpdate()
        {
            Vector3 v = new Vector3(0, -10000 * Time.fixedDeltaTime, 0);
            _body.AddForce(v);
            v.y = 0;
            if (state(MovementState.Left))
            {
                v.x = -_walkSpeed * Time.fixedDeltaTime + v.x;
                if (Mathf.Abs(v.x) > _maxWalkSpeed)
                    v.x = -_maxWalkSpeed;
            }
            else if (state(MovementState.Right))
            {
                v.x = _walkSpeed * Time.fixedDeltaTime + v.x;
                if (Mathf.Abs(v.x) > _maxWalkSpeed)
                    v.x = _maxWalkSpeed;
            }
            _body.velocity = v;
            Debug.Log("Velocity: " + _body.velocity);
        }

    I'm trying here to just move the rigid body by applying gravity and a linear force for left and right. I have set up a physic material with no bouncing, 0 friction when moving and 1 friction when standing still. The main problem is that I have colliders with slopes, and the velocity changes between going up the slope (slower), going down the slope (faster) and walking on a flat collider (normal). How could this be fixed? As you can see, I'm always applying the same velocity on the x axis. For the player I have a sphere at the feet position, and that is the rigidbody I'm applying forces to. Any other tips that could make my life easier with this are welcome :). P.S. On the way home I realised I could solve this by applying a constant force parallel to the surface the player is walking on, but I don't know if it is the best method.
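
    For what it's worth, here is a rough sketch of that last idea (not the original poster's code; the class, field names and ray length are made up): raycast down for the ground normal and project the desired movement onto the slope before setting the velocity.

        using UnityEngine;

        public class SlopeMover : MonoBehaviour
        {
            public Rigidbody body;        // the sphere rigidbody at the feet
            public float walkSpeed = 5f;

            // Move along the ground so the speed stays the same uphill, downhill and on the flat.
            // 'input' is -1..1 from the left/right keys.
            void MoveAlongSlope(float input)
            {
                Vector3 desired = new Vector3(input, 0f, 0f);
                RaycastHit hit;

                if (Mathf.Abs(input) > 0.01f &&
                    Physics.Raycast(body.position, Vector3.down, out hit, 1.5f))
                {
                    // Project the horizontal direction onto the slope plane so the speed
                    // along the surface is constant regardless of the incline.
                    Vector3 n = hit.normal;
                    Vector3 alongSlope = (desired - Vector3.Dot(desired, n) * n).normalized;
                    body.velocity = alongSlope * walkSpeed;
                }
                else
                {
                    // No input or not grounded: leave the y velocity to gravity.
                    body.velocity = new Vector3(desired.x * walkSpeed, body.velocity.y, 0f);
                }
            }
        }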

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities, what I would like to do is take each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based off of something like the S&P 500 price. The components I think would make sense in the model are: user, direction (long/short), market direction, and sector of the stock. The thought is that some users are better in bull markets than bear markets (and vice versa), and some are better at shorts than longs - and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and rank each individual recommendation by displaying the available data: absolute, market and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over a series of time so that as recommendations stream into the application we maintain a ranking of that stock (i.e. similar to a correlation coefficient) as to the likelihood that that recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux. I have never taken an AI class / read an AI book / never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started... My hope is to implement this with F# and be able to impress my friends with a new skillset in F#, with an implementation of machine learning, and potentially something (an application / source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
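
    As a small, hedged illustration of the correlation-coefficient part of this idea (the data below is made up, and although the poster mentions F#, the sketch is in C#), a Pearson correlation between a user's recommendation signal and the subsequent price change could be computed like this:

        using System;
        using System.Linq;

        static class RecommendationScoring
        {
            // Pearson correlation coefficient between two equal-length series, e.g. a user's
            // BUY/SELL signal (+1 / -1) and the stock's return over the following 90 days.
            public static double Pearson(double[] x, double[] y)
            {
                double meanX = x.Average(), meanY = y.Average();
                double cov = 0, varX = 0, varY = 0;
                for (int i = 0; i < x.Length; i++)
                {
                    cov += (x[i] - meanX) * (y[i] - meanY);
                    varX += (x[i] - meanX) * (x[i] - meanX);
                    varY += (y[i] - meanY) * (y[i] - meanY);
                }
                return cov / Math.Sqrt(varX * varY);
            }

            static void Main()
            {
                // Made-up example: three BUY (+1) and two SELL (-1) calls vs. 90-day returns.
                double[] signal = { 1, 1, -1, 1, -1 };
                double[] returns = { 0.12, 0.05, -0.08, 0.02, 0.01 };
                Console.WriteLine("Correlation: {0:F2}", Pearson(signal, returns));
            }
        }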

    Read the article

  • Nice take on Open Source

    - by EmbeddedInsider
    I just revisited the “Micro Framework” - Microsoft’s bootable runtime, essentially an OS that allows managed code to run on small 32-bit CPUs, even without memory management. Things are happening: http://msdn.microsoft.com/en-us/netframework/bb267253.aspx Abstract: The Microsoft .NET Micro Framework is a bootable runtime module that brings the advantages of .NET programming to devices too resource-constrained to run other Microsoft embedded platforms. The benefits of developing with the .NET Micro Framework include the C# programming language, a managed execution environment, a substantial subset of the .NET libraries, and Visual Studio™ deployment and debugging. In this white paper we explain why the .NET Micro Framework is an ideal choice for embedded development and provide technical details of the platform’s Hardware Abstraction Layer (HAL) and Common Language Runtime (CLR). The “Micro Framework” is an interesting product; it is very low cost, like zero. And it is largely community controlled under the Apache License. A partner network is building, and the application environment is .NET. I have been following this for some time, and the community open source approach seems to be working. There are new features/packages emerging, for example an F# programming language (ARGH! I am still wrestling with VB and C#). Anyway, what I found most interesting was a port to Tron. Tron is a very popular Japanese open source initiative. It is a very real-time, very compact kernel, and is, like the Micro Framework, ‘free as in beer’. One limitation of MF was that it was not real-time, but the merger with Tron may eliminate that problem. Certainly, if I were dealing with a consumer product with quantities in the millions (like a SmartGrid device, or a toy) I would seriously consider something out of this technology pool.

    Read the article

  • Are today's general purpose languages at the right level of abstraction ?

    - by KeesDijk
    Today Uncle Bob Martin, a genuine hero, showed this video. In the video, Bob Martin claims that our programming languages are at the right level for our problems at this time. One of the points I take from the video is that Bob Martin sees us as detail managers and our problems as being at the detail level. This is the first time I have to disagree with Bob Martin, and I was wondering what the people at Programmers think about this. First, there is a difference between MDA and MDE. MDA in itself hasn't worked, and I blame way too much formalisation at a level where you can't formalize these kinds of problems. MDE and MDD are still trying to prove themselves and in my mind show great promise - e.g. look at MetaEdit. The detail still needs to be managed, in my mind, but you do so in one place (a framework or generators) instead of at multiple places. Right for our kind of problems? I think it depends on what problems you look at. Do the current programming languages keep up with the current demands on time to market? Are they good at bridging the business-IT communication gap? So what do you think?

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content.  It's available for replay along with a download of the slidedeck.  Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database.  This can greatly improve your search performance.  And this would apply to both Oracle Text Search and DATABASE.FULLTEXT search methods.  Peter describes how a collection can become fragmented over time as content is added, updated, and deleted.  Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to be run automatically.   [Read more] 
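
    By way of illustration only, the "simple procedure call" for an Oracle Text index is CTX_DDL.OPTIMIZE_INDEX, which can be wrapped in a scheduler job. The index name below is a placeholder; check the actual name of your WebCenter Content full-text index (and the webcast itself) before running anything like this:

        -- Full optimization pass over the full-text index (index name is hypothetical).
        BEGIN
          CTX_DDL.OPTIMIZE_INDEX(idx_name => 'FT_IDCTEXT1', optlevel => 'FULL');
        END;
        /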

    Read the article

  • Including Overestimates in MSF Agile Burndown Report

    After using the MSF Agile Burndown report for a few weeks in our new TFS 2010 environment, I have to say I am a huge fan.  I especially find the assignment of Work (hours) portion to be very useful in motivating the team to keep their tasks up to date every day.  Here is a view of the report that you get out of the box. However, I have one problem.  Id like the top line to have some more meaning.  Specifically, when it is changing is that an indication of scope creep, mis-estimation or a combination of the two.  So, today I decided to try to build in a view that would show overestimated time.  This would give me a more consistent top line.  My idea was to add another visual area on top of the graph whenever my originally estimated time was greater than the sum of completed and remaining.  This will effectively show me at least when the top line goes down whether it was scope change or over-estimation. Here is the final result. How did I do it?  Step 1: Add Cumulative_Original_Estimate field to the dsBurndown My approach was to follow the pattern where the completed time is included in the burndown chart and add my Overestimated hours.  First I added a field to the dsBurndown to hold the estimated time.         <Field Name="Cumulative_Original_Estimate">           <DataField><?xml version="1.0" encoding="utf-8"?><Field xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="Measure" UniqueName="[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]" /></DataField>           <rd:TypeName>System.Int32</rd:TypeName>         </Field> Step 2: Add a column to the query SELECT {     [Measures].[DateValue],     [Measures].[Work Item Count],     [Measures].[Microsoft_VSTS_Scheduling_RemainingWork],     [Measures].[Microsoft_VSTS_Scheduling_CompletedWork],     [Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate],     [Measures].[RemainingWorkLine],     [Measures].[CountLine] Step 3: Add a new Item to the QueryDefinition <Item> <ID xsi:type="Measure"> <MeasureName>Microsoft_VSTS_Scheduling_OriginalEstimate</MeasureName> <UniqueName>[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]</UniqueName> </ID> <ItemCaption>Cumulative Original Estimate</ItemCaption> <FormattedValue>true</FormattedValue> </Item> Step 4: Add a new ChartMember to DundasChartControl1 The burndown chart is called DundasChartControl1.  I need to add a ChartMember for the estimated time. 
<ChartMember>   <Label>Cumulative Original Estimate</Label> </ChartMember> Step 5: Add a ChartSeries to show the Overestimated Time <ChartSeries Name="OriginalEstimate">   <Hidden>=IIF(Parameters!YAxis.Value="count",True,False)</Hidden>   <ChartDataPoints>     <ChartDataPoint>       <ChartDataPointValues>         <Y>=IIF(Parameters!YAxis.Value = "hours", IIF(SUM(Fields!Cumulative_Original_Estimate.Value)>SUM(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value), SUM(Fields!Cumulative_Original_Estimate.Value-(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value)),Nothing),Nothing)</Y>       </ChartDataPointValues>       <ChartDataLabel>         <Style>           <FontFamily>Microsoft Sans Serif</FontFamily>           <FontSize>8pt</FontSize>         </Style>       </ChartDataLabel>       <Style>         <Border>           <Color>#9bdb00</Color>           <Width>0.75pt</Width>         </Border>         <Color>#666666</Color>         <BackgroundGradientEndColor>#666666</BackgroundGradientEndColor>       </Style>       <ChartMarker>         <Style />       </ChartMarker>       <CustomProperties>         <CustomProperty>           <Name>LabelStyle</Name>           <Value>Top</Value>         </CustomProperty>       </CustomProperties>     </ChartDataPoint>   </ChartDataPoints>   <Type>Area</Type>   <Subtype>Stacked</Subtype>   <Style />   <ChartEmptyPoints>     <Style>       <Color>#00ffffff</Color>     </Style>     <ChartMarker>       <Style />     </ChartMarker>     <ChartDataLabel>       <Style />     </ChartDataLabel>   </ChartEmptyPoints>   <LegendName>Default</LegendName>   <ChartItemInLegend>     <LegendText>Overestimated Hours</LegendText>   </ChartItemInLegend>   <ChartAreaName>Default</ChartAreaName>   <ValueAxisName>Primary</ValueAxisName>   <CategoryAxisName>Primary</CategoryAxisName>   <ChartSmartLabel>     <Disabled>true</Disabled>     <MaxMovingDistance>22.5pt</MaxMovingDistance>   </ChartSmartLabel> </ChartSeries> Thats it.  I find the improved report to add some value over the out of the box version.  You can download the updated rdl for the report here.  Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Microsoft MVP 2012 – ASP.NET/IIS

    - by hajan
    It’s Sunday. I wasn’t really sure whether I should expect something today or not, although its 1st of July when we all know that the new and re-awarded MVPs should get the ‘Congratulations’ email by Microsoft. And YES! I GOT IT! This is my second year, and first time re-awarded… Microsoft MVP 2012 The feeling is exactly same as the first time… I am honored, privileged, veeeery happy and thankful to Microsoft for this prestigious award! The past year was really great with all the events, speaking engagements in various conferences and camps, many other community activities and the first time visit at MVP Global Summit. I am looking forward to boost even more the Microsoft community activities in the next year... And… part of the email message: Dear Hajan Selmani, Congratulations! We are pleased to present you with the 2012 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in ASP.NET/IIS technical communities during the past year. I would like to say a big THANK YOU to all stakeholders. First of all, THANK YOU MICROSOFT for this prestigious award, Thanks to CEE & Italy Region MVP Lead, Alessandro Teglia, who did a great job by helping and supporting MVPs through the whole past year, I hope we will continue collaborating in the same way on the forthcoming year! Thanks to my family, friends, supports, followers, those who read my blogs regularly and have made me reach more than thousands of comments in my ASP.NET Blog :), those who collaborate and work with me on a daily basis and are supporting me in all my community activities. Thank You Everyone! There are lot of new, exciting, great and innovative technologies in the Microsoft Technology Stack. I am excited and really looking forward to rock the community in the years to come! THANK YOU! Hajan

    Read the article

  • Windows 7 not booting after installing Ubuntu 12.10

    - by Imran Choudhry
    I have a Samsung SF511 with Windows 7 Home preinstalled. I installed Ubuntu 12.10 last night. First I burned the downloaded Ubuntu 12.10 ISO to a DVD from Windows 7, then I restarted, changed the boot option to DVD and let the Ubuntu DVD run. I selected "Install Ubuntu alongside Windows 7". In the next step I allocated the disk space. Ubuntu installed perfectly and it ran OK. I spent a good amount of time exploring Ubuntu. When I restarted to start my work on Windows again and selected Windows 7 to boot, it showed a blue screen after booting up (after logging in). I tried restarting several times but the same blue screen kept showing at the same point of boot-up. I also restored Windows to an earlier time. Left with no option, I booted from the Samsung-provided Windows recovery option. The recovery process ran fine, but when it restarted to boot Windows for the first time, all I see is a black screen. Nothing happens. It switches between a dark command prompt screen and a pitch black screen as if it's rebooting every 10 seconds or so. The laptop can be booted from the DVD drive or a USB drive but is not booting from the hard disk. What should I do in order to use both Windows 7 and Ubuntu?

    Read the article

  • MySQL syntax in C#

    - by Meko
    Hi. I am using MySQL from C#. I have a query that works if I run it in MySQL Workbench, but in C# it does not return any value and does not give any error either. The only difference is that in MySQL Workbench I prefix table names with the database name (databaseName.tableName), but in C# I think that isn't necessary. This is the part of the query which does not return anything:

        "(select Lesson_Name from schedule where Group_NO = (select Group_NO from sinif inner join student ON sinif.Group_ID=student.Group_ID where Student_Name=(?Student))" +
        " And Day_Name =(select Day_Name from day inner join date ON day.Day_ID=date.DayName where Date=(?Date))" +
        "And Lesson_Time= (select Lesson_Time from clock where Lesson_Time <= (?Time)order by Lesson_Time DESC limit 0, 1) " +
        " And Week_NO = (select Week_NO from week inner join date ON week.Week_ID=date.Week_ID where Date=(?Date)))"

    And here is all the code which executes when the user clicks the button:

        private void check_B_Click(object sender, EventArgs e)
        {
            connection.Open();
            for (int i = 0; i < existingStudents.Count; i++)
            {
                MySqlCommand cmd1 = new MySqlCommand("select Student_Name,Student_Surname,Student_MacAddress from student ", connection);
                MySqlCommand cmd2 = new MySqlCommand("insert into check_list (Student,Mac_Address,Date,Time,Lesson_Name)" +
                    "values((?Student),(?MacAddress),(?Date),(?Time)," +
                    "(select Lesson_Name from schedule where Group_NO = (select Group_NO from sinif inner join student ON sinif.Group_ID=student.Group_ID where Student_Name=(?Student))" +
                    " And Day_Name =(select Day_Name from day inner join date ON day.Day_ID=date.DayName where Date=(?Date))" +
                    "And Lesson_Time= (select Lesson_Time from clock where Lesson_Time <= (?Time)order by Lesson_Time DESC limit 0, 1) " +
                    " And Week_NO = (select Week_NO from week inner join date ON week.Week_ID=date.Week_ID where Date=(?Date))))", connection);
                MySqlParameter param1 = new MySqlParameter();
                param1.ParameterName = "?Student";
                reader = cmd1.ExecuteReader();
                if (reader.HasRows)
                    while (reader.Read())
                    {
                        if (reader["Student_MacAddress"].ToString() == existingStudentsMac[i].ToString())
                            param1.Value = reader["Student_Name"] + " " + reader["Student_Surname"];
                    }
                reader.Close();
                MySqlParameter param2 = new MySqlParameter();
                param2.ParameterName = "?MacAddress";
                param2.Value = existingStudentsMac[i];
                MySqlParameter param3 = new MySqlParameter();
                param3.ParameterName = "?Date";
                param3.Value = DateTime.Today.Date;
                MySqlParameter param4 = new MySqlParameter();
                param4.ParameterName = "?Time";
                param4.Value = DateTime.Now.ToString("HH:mm");
                cmd2.Parameters.Add(param1);
                cmd2.Parameters.Add(param2);
                cmd2.Parameters.Add(param3);
                cmd2.Parameters.Add(param4);
                cmd2.ExecuteNonQuery();
            }
            connection.Close();
            MessageBox.Show("Sucsess :)");
        }

    Read the article

  • Using the JRockit Flight Recorder as an In-Flight Black Box

    - by Marcus Hirt
    The new JRockit Flight Recorder has some very interesting properties. It can be used like the black box of an airplane, allowing users to go back in time and check what was happening around the time when something went wrong. Here is how to enable the default continuous recording in JRockit to allow for that use case. The flight recorder is on by default in JRockit R28, the problem is that there is no recording running by default. To configure JRockit to start with the default recording running, add the parameter: -XX:FlightRecorderOptions=defaultrecording=true That will enable a recording with recording ID 0. You can see that it has been started properly by choosing Show Recordings from the context menu in JRockit Mission Control.   You should see something similar to the picture below. Simply right click on the recording and select dump to dump information available in the flight recorder. You can select to dump data for a specific period of time or all data. For more information about the command line parameters available to control the Flight Recorder, see the JRockit documentation.

    Read the article

  • pros and cons of taking an ABAP job

    - by sJhonny
    I'm a programmer with 3 years of .NET experience under my belt, and I am currently looking for a new job. One of the options I'm considering is an OO ABAP developer position with SAP. However, I have several concerns about taking an ABAP job: as ABAP is used exclusively by SAP, any experience in ABAP that I gain would be irrelevant in the outside world. I'm also worried that I wouldn't be exposed to new technologies while working in ABAP, and ultimately I would lose touch with what's going on in the world. This is a real sore point, since I really enjoy exploring and learning new & cool stuff. (Note: yes, I could experiment with other technologies & trends on my own time, but this is much harder to do, and isn't really the same as working full-time with them.) One of the nicest things about programming, for me, is finding a great OO architecture / design (I'm really into object-oriented :)). I know that ABAP is a procedural language, and I'm not certain how 'OO' its OO version is. This leads me to the conclusion that, unless I stay with SAP to the end of my career, any time spent there would not be professionally beneficial. Is there anyone who can shed some light on these opinions? Are my concerns founded? Are there any advantages (career and technology-wise) to ABAP that I'm missing?

    Read the article

  • Should I be worried about overengineering programming assignments given during interview process?

    - by DormoTheNord
    I recently had a phone interview with a company. After that phone interview, I was told to complete a short programming assignment (a small program; shouldn't take more than three hours). I'm only directly instructed to complete the assignment and turn in the code. I was given complete freedom to use any language I wished and was not told exactly how to turn in the code. Immediately I planned on throwing it on Github, writing a test suite for it, using Travis-CI (free continuous integration for public Github repositories) to run the test suites, and using CMake to build the Linux makefiles for Travis-CI. That way, not only can I demonstrate that I understand how to use Git, CMake, Travis-CI, and how to write tests, but I can also simply link to the Travis-CI page so they can see the output of the tests. I figured that'd make it a tiny bit more convenient for the interviewer. Since I know those technologies well, it would add essentially no time to the assignment. However, I'm a bit worried that doing all this for a relatively simple task would look bad. Although it wouldn't add much more time at all for me, I don't want them thinking I spend too much time on things that should be simple.

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.

    Read the article

  • Static objects and concurrency in a web application

    - by Ionut
    I'm developing small Java web applications on a Tomcat server and I'm using MySQL as my database. Up until now I was using a connection singleton for accessing the database, but I found out that this ensures just one connection per application, and there will be problems if multiple users want to access the database at the same time (they all have to make use of that single Connection object). I created a connection pool, and I hope that this is the correct way of doing things.

    Furthermore, it seems that I have developed the bad habit of creating a lot of static objects and static methods (mainly because I was under the wrong impression that every static object would be duplicated for every client that accesses my application). Because of this, all the service classes (classes used to handle database data) are static and distributed through a ServiceFactory:

    public class ServiceFactory {
        private static final String JDBC = "JDBC";
        private static String impl;
        private static AccountService accountService;
        private static BoardService boardService;

        public static AccountService getAccountService() {
            initConfig();
            if (accountService == null) {
                if (impl.equalsIgnoreCase(JDBC)) {
                    accountService = new JDBCAccountService();
                }
            }
            return accountService;
        }

        public static BoardService getBoardService() {
            initConfig();
            if (boardService == null) {
                if (impl.equalsIgnoreCase(JDBC)) {
                    boardService = new JDBCBoardService();
                }
            }
            return boardService;
        }

        private static void initConfig() {
            if (StringUtil.isEmpty(impl)) {
                impl = ConfigUtil.getProperty("service.implementation");
                // If the config lookup failed, fall back to the standard implementation
                if (StringUtil.isEmpty(impl)) {
                    impl = JDBC;
                }
            }
        }
    }

    This is the factory class which, as you can see, allows just one instance of each service to exist at any time. Now, is this a bad practice? What happens if, let's say, 1000 users access AccountService simultaneously? I know that all these questions and bad practices come from a poor understanding of the static keyword in a web application and of the way the server handles such members. Any help on this topic would be more than welcome. Thank you for your time!
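    One common remedy for the race in this kind of lazy initialization is the initialization-on-demand holder idiom, combined with a container-managed connection pool exposed as a JNDI DataSource. Below is a minimal sketch under those assumptions, not the poster's actual code: the JNDI name "jdbc/MyDB" is hypothetical (it would be defined in Tomcat's context.xml), and AccountService/JDBCAccountService are the question's own types, assumed here to accept a DataSource instead of opening connections themselves.

    // Sketch: thread-safe lazy initialization backed by a pooled JNDI DataSource.
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public final class PooledServiceFactory {

        private PooledServiceFactory() { }

        // The JVM guarantees that this nested class is initialized exactly once,
        // on first use, so no explicit synchronization is needed.
        private static class Holder {
            static final DataSource POOL = lookupPool();
            static final AccountService ACCOUNT_SERVICE = new JDBCAccountService(POOL);
        }

        public static AccountService getAccountService() {
            return Holder.ACCOUNT_SERVICE;
        }

        private static DataSource lookupPool() {
            try {
                // The shared object is the pool, not a single Connection: each request
                // borrows a Connection from the pool and returns it by closing it.
                return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
            } catch (NamingException e) {
                throw new IllegalStateException("Connection pool is not configured", e);
            }
        }
    }

    A servlet would then call PooledServiceFactory.getAccountService(), and each service method would obtain its own Connection from the pool and close it when done, so concurrent users never share connection state.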

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory makes Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, but then they discovered that there's a lot more to building a system than buying hardware and assembling it. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

    Engineering Systems is Hard Work!

    The backbone of Exalogic is its InfiniBand network: four times the bandwidth of even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed. It is true that Exalogic uses a standard, open-source OpenFabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with the fastest possible speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users never needed them:

    - More than 100 bug fixes in the pieces not related to the Message Passing Interface (MPI), the protocol HPC users rely on most of the time but which is less useful in the enterprise.

    - Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.

    - Ethernet over InfiniBand (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which conveniently ship with a whole server infrastructure attached!
    - IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.

    - HA capability for SDP. High availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy, so if any cable, port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work, so in addition to WebLogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.

    - Security, for example spoof protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.

    - InfiniBand partitioning and Quality of Service (QoS): one of the first questions we get from customers about Exalogic is, "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition, QoS allows administrators to prioritize traffic flow in multi-tenant environments so they can keep their service levels where it matters most.

    - Resilient IB fabric management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management that integrates seamlessly with Oracle Enterprise Manager Ops Center.

    In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

    Exabus: Because it's not About the Size of Your Network, it's How You Use it!

    So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one. Either way, the next step is to actually leverage that InfiniBand performance. Here are the choices:

    1. Use traditional TCP/IP on top of the InfiniBand stack, or
    2. Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

    While more bandwidth is always a good thing, it's actually the low latency that enables superior application performance on any networking infrastructure: the lower the latency, the faster a response travels through the network and the more transactions you can close per second. The reason InfiniBand is such a low-latency technology is that it gets rid of most, if not all, of the traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server, with no kernel, driver, UDP/TCP or other networking stack overhead involved!

    Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: integrating your middleware with fast, low-level InfiniBand protocols.
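    To give a sense of what this looks like at the application level, here is a minimal Java sketch. The socket code itself is completely ordinary; whether its bytes travel over plain TCP or over SDP on an InfiniBand fabric is decided by JVM configuration rather than by the code (Java 7 added SDP support on some platforms, typically enabled via a -Dcom.sun.sdp.conf=<file> setting with rules such as "connect 192.168.10.0/24 *"). The exact flags, the host and the port below are assumptions to verify against your platform's documentation, not details from the original article.

    // Ordinary client-side socket code; SDP vs. TCP is chosen by JVM configuration.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class LatencyProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder address of an echo-style service reachable over the IB fabric.
            try (Socket socket = new Socket("192.168.10.11", 7777);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                long start = System.nanoTime();
                out.println("ping");                      // request goes out
                String reply = in.readLine();             // reply comes back
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println(reply + " round trip: " + micros + " us");
            }
        }
    }

    Transparent socket-level SDP is only the entry point, though: getting middleware traffic such as JDBC calls, session replication or cache updates onto SDP and RDMA requires integration work inside the middleware itself.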
    And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

    - RDMA and SDP integration at the JDBC driver level (SDP), for Oracle WebLogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate with each other and with the Oracle database a lot faster than over standard networking protocols.

    - Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.

    - Oracle WebLogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your on-ramps so you can feed it with many cars in parallel.

    - Integration of WebLogic with Oracle Exadata for faster performance, optimized session management and failover.

    As you can see, "Exabus" is Oracle's word for all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware levels - all working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

    The Extras: Less Hassle, More Productivity, Faster Time to Market

    And then there are the other advantages of Engineered Systems that are as true for Exalogic as they are for every other Engineered System:

    - One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.

    - Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would otherwise only have manifested themselves after you had built the system from scratch.

    - Everything is built, tested and integrated at the factory. Less integration pain for you, faster time to market.

    - Every Exalogic machine worldwide is identical to Oracle's own machines in the lab: instant replication of any problem you may encounter, faster time to resolution.

    - Simplified patching, management and operations.

    - One throat to choke: imagine the finger-pointing hell of a system put together from several different vendors' parts. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

    For more business-centric values, read The Business Value of Engineered Systems.

    Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project

    And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who starts to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project, and the outcome is unlikely to be very enterprise-class. It won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time.
    And only Oracle can bring the power of InfiniBand to customers in an enterprise-class, ready-to-use, pre-integrated form, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently.

    P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

< Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >