Search Results

Search found 64639 results on 2586 pages for 'work stealing'.


  • Exitus Acta Probat: The Post-Processing Module

    - by Phil Factor
    Sometimes, one has to make certain ethical compromises to ensure the success of a corporate IT project. Exitus Acta Probat (literally 'the result validates the deeds', meaning that the ends justify the means).

It was a while back, whilst working as a Technical Architect for a well-known international company, that I was given the task of designing the architecture of a rather specialized accounting system. We'd tried an off-the-shelf (OTS) Windows-based solution which crashed with dispiriting regularity, and didn't quite do what the business required. After a great deal of research and planning, we commissioned a Unix-based system that used X terminals for the desktops of the participating staff. X terminals are now obsolete, but were then hot stuff; stripped-down Unix workstations that provided client GUIs for networked applications long before the days of AJAX, Flash, Air and DHTML.

I've never known a project go so smoothly: I'd been initially rather nervous about going the Unix route, believing then that Unix programmers were excitable creatures who were prone to indulge in role-play enactments of elves and wizards at the weekend, but the programmers I met from the company that did the work seemed to be rather donnish, earnest people who quickly grasped our requirements and were faultlessly professional in their work. After thinking lofty thoughts for a while, there was considerable pummeling of keyboards by our suppliers, and a beautiful, robust application was delivered to us ahead of schedule.

Soon, the department who had commissioned the work received shiny new X terminals to replace their rather depressing lavatory-beige PCs. I modestly hung around as the application was commissioned and deployed to the department in order to receive the plaudits. They didn't come. Something was very wrong with the project. I couldn't put my finger on the problem, and the users weren't doing any more than desperately and futilely searching the application to find a fault with it.

Many times in my life, I've come up against a predicament like this: the roll-out of an application goes wrong and you are hearing nothing that helps you to discern the cause but nit-*** noise. There is a limit to the emotional heat you can pack into a complaint about text being in the wrong font, or an input form being slightly cramped, but they tried their best. The answer is, of course, one that every IT executive should have tattooed prominently where they can read it in emergencies: In Vino Veritas (literally, 'in wine the truth'; alcohol loosens the tongue. A Roman proverb). It was time to slap the wallet and get the department down the pub with the tab in my name. It was an eye-watering investment, but hedged with an over-confident IT director who relished my discomfort.

To cut a long story short, the real reason gushed out with the third round. We had deprived them of their PCs, which had been good for very little from the pure business perspective, but had provided them with many hours of happiness playing computer-based minesweeper and solitaire. There is no more agreeable way of passing away the interminable hours of wage-slavery than minesweeper or solitaire, and the employees had applauded the munificence of their employer who had provided them with the means to play it. I had, unthinkingly, deprived them of it.

I held an emergency meeting with our suppliers the following day. I came over big with the notion that it was in their interests to provide a solution.
They played it cool, probably knowing that it was my head on the block, not theirs. In the end, they came up with a compromise: they would temporarily descend from their lofty, cerebral stamping grounds in order to write a server-based Minesweeper and Solitaire game for X terminals, and install it in a concealed place within the system. We'd have to pay for it, though. I groaned. How could we do that? "Could we call it a 'post-processing module'?" suggested their account executive.

And so it came to pass. The application was a resounding success. Every now and then, the staff were able to indulge in some 'post-processing', with what turned out to be a very fine implementation of both minesweeper and solitaire. There were several refinements: a single click on a 'boss' button turned the games into what looked just like a financial spreadsheet. They even threw in a multi-user version of Battleships. The extra payment for the post-processing module went through the change-control process without anyone noticing anything untoward, and peace once more descended.

Only one thing niggles. Those games were good. Do they still survive, somewhere in a Linux library? If so, I'd like to claim a small part in their production.

    Read the article

  • Help Prevent Carpal Tunnel Problems with Workrave

    - by Matthew Guay
    Whether for work or leisure, many of us spend entirely too much time on the computer every day. This puts us at risk of developing or aggravating Carpal Tunnel problems, but thanks to Workrave you can help avert these problems. Workrave helps prevent Carpal Tunnel problems by reminding you to get away from your computer periodically. Breaking up your computer time with movement can help alleviate many computer and office related health problems. Workrave helps by reminding you to take short pauses after several minutes of computer use, and longer breaks after continued use. You can also use it to keep from using the computer for too much time in a day. Since you can change the settings to suit you, this can be a great way to make sure you’re getting the breaks you need.

Install Workrave on Windows
If you’re using Workrave on Windows, download (link below) and install it with the default settings. One installation setting you may wish to change is the startup. By default Workrave will run automatically when you start your computer; if you don’t want this, you can simply uncheck the box and proceed with the installation. Once setup is finished, you can run Workrave directly from the installer. Or you can open it from your start menu by entering “workrave” in the search box.

Install Workrave in Ubuntu
If you wish to use it in Ubuntu, you can install it directly from the Ubuntu Software Center. Click the Applications menu, and select Ubuntu Software Center. Enter “workrave” into the search box in the top right corner of the Software Center, and it will automatically find it. Click the arrow to proceed to Workrave’s page. This will give you information about Workrave; simply click Install to install Workrave on your system. Enter your password when prompted. Workrave will automatically download and install. When finished, you can find Workrave in your Applications menu under Universal Access.

Using Workrave
Workrave by default shows a small counter on your desktop, showing the length of time until your next Micro break (30 second break), Rest break (10 minute break), and the maximum amount of computer usage for the day. When it’s time for a micro break, Workrave will pop up a reminder on your desktop. If you continue working, it will disappear at the end of the timer. If you stop, it will start a micro-break which will freeze most on-screen activities until the timer is over. You can click Skip or Postpone if you do not want to take a break right then. After an hour of work, Workrave will give you a 10 minute rest break. During this it will show you some exercises that can help eliminate eyestrain, muscle tension, and other problems from prolonged computer usage. You can click through the exercises, or can skip or postpone the break if you wish.

Preferences
You can change your Workrave preferences by right-clicking on its icon in your system tray and selecting Preferences. Here you can customize the time between your breaks, and the length of your breaks. You can also change your daily computer usage limit, and can even turn off the postpone and skip buttons on notifications if you want to make sure you follow Workrave and take your rests! From the context menu, you can also choose Statistics. This gives you an overview of how many breaks, prompts, and more were shown on a given day. It also shows a total Overdue time, which is the total length of the breaks you skipped or postponed. You can view your Workrave history as well by simply selecting a date on the calendar.
Additionally, the Activity tab in the Statistics pane shows more info about your computer usage, including total mouse movement, mouse button clicks, and keystrokes.

Conclusion
Whether you’re suffering from Carpal Tunnel or trying to prevent it, Workrave is a great solution to help remind you to get away from your computer periodically and rest. Of course, since you can simply postpone or skip the prompts, you’ve still got to make an effort to help your own health. But it does give you a great way to remind yourself to get away from the computer, and especially for geeks, this may be something that we really need!

Download Workrave

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PL/SQL objects directly into a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third party source control system, right?

Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it’s better to do your PL/SQL work in the Procedure Editor when Chet pipes up:

Don’t do it that way, that’s wrong. Just press play to edit the PL/SQL directly in the database.

Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet’s main objection was, and again I’m going to paraphrase:

You should develop offline in your SQL Worksheet. Don’t do anything in the database until it’s done.

I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that.

What is the ideal Oracle Development Environment?
If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this:

1. Do I have a program that already does something like this – copy and paste
2. Has some smart person already written something like this – copy and paste
3. Start typing in the white-screen-of-panic and bungle along until I get something that half-works
4. Tweak, debug, test until I have fooled my subconscious into thinking that it’s ‘good’

As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development.) So here’s what I like to do:

Run a Local Instance of Oracle on my Machine and Develop My Code Privately
I take a copy of development – that’s what source control is for after all – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program, to a mostly complete program. Is this right? If not, it doesn’t seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion.

So what’s so wrong with coding directly into a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I’m sure Chet could add many more as my memory fails me at the moment.
But here goes:

- Development instance isn’t properly backed up – would hate to lose that work
- Development is wiped once a week and copied over from Prod – don’t laugh
- Someone clobbers your code
- You accidentally on purpose clobber someone else’s code
- The more developers you have in a single fish pond, the greater chance something ‘bad’ will happen

This Isn’t One of Those Posts Where I Tell You What You Should Be Doing
I realize many shops won’t be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval.

SQL Developer can do local file tracking, but you should be using Source Control too!
I will say that I think it’s imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat-of-their-pants.

I’d love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing, if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • 2d tank movement and turret solution

    - by Phil
    Hi! I'm making a simple top-down tank game on the iPad where the user controls the movement of the tank with the left "joystick" and the rotation of the turret with the right one. I've spent several hours just trying to get it to work decently but now I turn to the pros :)

I have two referential objects, one for the movement and one for the rotation. The referential objects always stay max two units away from the tank and I use them to tell the tank in what direction to move. I chose this approach to decouple movement and rotational behaviour from the raw input of the joysticks; I believe this will make it simpler to implement whatever behaviour I want for the tank.

My first problem is that the turret rotates the long way to the target. With this I mean that the target can be -5 degrees away in rotation and still it rotates 355 degrees instead of -5 degrees. I can't figure out why.

The other problem is with the movement. It just doesn't feel right to have the tank turn while moving. I'd like to have a solution that would work as well for the AI as for the player: a black-box function for the movement where the player only specifies in what direction it should move, and it moves there under the constraints that are imposed on it. I am using the standard joystick class found in the Unity iPhone package.

This is the code I'm using for the movement:

    public class TankFollow : MonoBehaviour
    {
        //Check angle difference and turn accordingly
        public GameObject followPoint;
        public float speed;
        public float turningSpeed;

        void Update()
        {
            transform.position = Vector3.Slerp(transform.position, followPoint.transform.position, speed * Time.deltaTime);

            //Calculate angle
            var forwardA = transform.forward;
            var forwardB = (followPoint.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());

            if (angleDiff > 5)
            {
                //Rotate to
                transform.Rotate(new Vector3(0, (-turningSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else if (angleDiff < 5)
            {
                transform.Rotate(new Vector3(0, (turningSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else
            {
            }

            transform.position = new Vector3(transform.position.x, 0, transform.position.z);
        }
    }

And this is the code I'm using to rotate the turret:

    void LookAt()
    {
        var forwardA = -transform.right;
        var forwardB = (toLookAt.transform.position - transform.position);
        var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
        var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
        var angleDiff = Mathf.DeltaAngle(angleA, angleB);
        //print(angleDiff.ToString());

        if (angleDiff - 180 > 1)
        {
            //Rotate to
            transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
            //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
        }
        else if (angleDiff - 180 < -1)
        {
            transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
            //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            print((angleDiff - 180).ToString());
        }
        else
        {
        }
    }

Since I want the turret reference point to turn in relation to the tank (when you rotate the body, the turret should follow and not stay locked on, since it makes it impossible to control when you've got two thumbs to work with), I've made the TurretFollowPoint a child of the Turret object, which in turn is a child of the body. I'm thinking that I'm making it too difficult for myself with the reference points, but I'm imagining that it's a good idea. Please be honest about this point.

So I'll be grateful for any help I can get! I'm using Unity3d iPhone. Thanks!
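For readers hitting the same "turret turns the long way round" symptom, the usual cause is steering by a raw angle comparison instead of letting a signed shortest-path helper pick the direction. The sketch below is not the poster's code, just a minimal Unity-style illustration of the shortest-path approach under similar assumptions (a turret rotating around the world Y axis towards a toLookAt target); the names TurretAim, toLookAt and turretSpeed are placeholders chosen to mirror the question.

    using UnityEngine;

    // Minimal sketch: rotate a turret the shortest way towards a target.
    // Assumes the turret's forward axis points along +Z at zero Y rotation.
    public class TurretAim : MonoBehaviour
    {
        public Transform toLookAt;      // target point (placeholder, mirrors the question)
        public float turretSpeed = 90f; // degrees per second (assumed value)

        void Update()
        {
            Vector3 toTarget = toLookAt.position - transform.position;

            // Desired heading and current heading, both as angles around Y.
            float desiredAngle = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;
            float currentAngle = transform.eulerAngles.y;

            // MoveTowardsAngle always takes the shortest path (-5 degrees stays -5,
            // never 355), clamped to turretSpeed * Time.deltaTime per frame.
            float newAngle = Mathf.MoveTowardsAngle(currentAngle, desiredAngle, turretSpeed * Time.deltaTime);

            transform.rotation = Quaternion.Euler(0f, newAngle, 0f);
        }
    }

An equivalent quaternion-based variant is Quaternion.RotateTowards(transform.rotation, Quaternion.LookRotation(toTarget), turretSpeed * Time.deltaTime), which gives the same shortest-path behaviour without working in Euler angles.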

    Read the article

  • How to Sync Any Folder With SkyDrive on Windows 8.1

    - by Chris Hoffman
    Before Windows 8.1, it was possible to sync any folder on your computer with SkyDrive using symbolic links. This method no longer works now that SkyDrive is baked into Windows 8.1, but there are other tricks you can use. Creating a symbolic link or directory junction inside your SkyDrive folder will give you an empty folder in your SkyDrive cloud storage. Confusingly, the files will appear inside the SkyDrive Modern app as if they were being synced, but they aren’t.

The Solution
With SkyDrive refusing to understand and accept symbolic links in its own folder, the best option is probably to use symbolic links anyway — but in reverse. For example, let’s say you have a program that automatically saves important data to a folder anywhere on your hard drive — whether it’s C:\Users\USER\Documents\, C:\Program\Data, or anywhere else. Rather than trying to trick SkyDrive into understanding a symbolic link, we could instead move the actual folder itself to SkyDrive and then use a symbolic link at the folder’s original location to trick the original program. This may not work for every single program out there. But it will likely work for most programs, which use standard Windows API calls to access folders and save files. We’re just flipping the old solution here — we can’t trick SkyDrive anymore, so let’s try to trick other programs instead.

Moving a Folder and Creating a Symbolic Link
First, ensure no program is using the external folder. For example, if it’s a program data or settings folder, close the program that’s using the folder. Next, simply move the folder to your SkyDrive folder. Right-click the external folder, select Cut, go to the SkyDrive folder, right-click and select Paste. The folder will now be located in the SkyDrive folder itself, so it will sync normally. Next, open a Command Prompt window as Administrator. Right-click the Start button on the taskbar or press Windows Key + X and select Command Prompt (Administrator) to open it. Run the following command to create a symbolic link at the original location of the folder:

mklink /d "C:\Original\Folder\Location" "C:\Users\NAME\SkyDrive\FOLDERNAME\"

Enter the correct paths for the exact location of the original folder and the current location of the folder in your SkyDrive. Windows will then create a symbolic link at the folder’s original location. Most programs should hopefully be tricked by this symbolic link, saving their files directly to SkyDrive. You can test this yourself. Put a file into the folder at its original location. It will be saved to SkyDrive and sync normally, appearing in your SkyDrive storage online.

One downside here is that you won’t be able to save a file onto SkyDrive without it taking up space on the same hard drive SkyDrive is on. You won’t be able to scatter folders across multiple hard drives and sync them all. However, you could always change the location of the SkyDrive folder on Windows 8.1 and put it on a drive with a larger amount of free space. To do this, right-click the SkyDrive folder in File Explorer, select Properties, and use the options on the Location tab. You could even use Storage Spaces to combine the drives into one larger drive.

Automatically Copy the Original Files to SkyDrive
Another option would be to run a program that automatically copies files from another folder on your computer to your SkyDrive folder. For example, let’s say you want to sync copies of important log files that a program creates in a specific folder.
You could use a program that allows you to schedule automatic folder-mirroring, configuring the program to regularly copy the contents of your log folder to your SkyDrive folder. This may be a useful alternative for some use cases, although it isn’t the same as standard syncing. You’ll end up with two copies of the files taking up space on your system, which won’t be ideal for large files. The files also won’t be instantly uploaded to your SkyDrive storage after they’re created, but only after the scheduled task runs. There are many options for this, including Microsoft’s own SyncToy, which continues to work on Windows 8. If you were using the symbolic link trick to automatically sync copies of PC game save files with SkyDrive, you could just install GameSave Manager. It can be configured to automatically create backup copies of your computer’s PC game save files on a schedule, saving them to SkyDrive where they’ll be synced and backed up online.

SkyDrive support was completely rewritten for Windows 8.1, and the ability to use symbolic links in previous versions of SkyDrive was never officially supported, so it’s not surprising to see the trick break after a rewrite. None of the methods above are as convenient and quick as the old symbolic link method, but they’re the best we can do with the SkyDrive integration Microsoft has given us in Windows 8.1. It’s still possible to use symbolic links to easily sync other folders with competing cloud storage services like Dropbox and Google Drive, so you may want to consider switching away from SkyDrive if this feature is critical to you.
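As a rough illustration of that folder-mirroring idea (not something the article itself provides), here is a minimal C# sketch that copies new or changed files from a source folder into a SkyDrive subfolder. The two paths are made-up examples; in practice a scheduler such as Task Scheduler, or a ready-made tool like SyncToy, would run this kind of pass for you.

    using System;
    using System.IO;

    // Minimal one-way mirror: copy files that are new or newer into the SkyDrive folder.
    // The paths below are illustrative assumptions only.
    class LogFolderMirror
    {
        static void Main()
        {
            string source = @"C:\MyApp\Logs";
            string target = @"C:\Users\NAME\SkyDrive\MyAppLogs";

            Directory.CreateDirectory(target);

            foreach (string sourceFile in Directory.GetFiles(source))
            {
                string targetFile = Path.Combine(target, Path.GetFileName(sourceFile));

                // Copy only if the target is missing or older than the source.
                if (!File.Exists(targetFile) ||
                    File.GetLastWriteTimeUtc(sourceFile) > File.GetLastWriteTimeUtc(targetFile))
                {
                    File.Copy(sourceFile, targetFile, true);
                }
            }

            Console.WriteLine("Mirror pass complete: " + DateTime.Now);
        }
    }

Scheduled to run every few minutes, this gives the "copy on a schedule" behaviour described above, with the same trade-offs: two copies of each file on disk, and uploads that lag until the next pass.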

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

Ray Tracing
Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys
Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay
Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used.

Windows Kinect
The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer’s perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation
The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame
In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles
The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure
The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

On-Premise
- Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
- Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
- Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
- Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

Windows Azure
- Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s.

Each worker role will use the following process to render the animation frames:

1. The worker process polls the job queue; if a job is available the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle
In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

Rendering the Animation
The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation
I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources
According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios
Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios
I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
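The JobQueue and CloudRayBlob helpers called from the Run() method above are not shown in the article. As a rough idea of what such a queue wrapper might look like (an assumption on my part, not the author's actual code), here is a minimal sketch built on the Windows Azure storage client library of that era; the public methods mirror the calls made in Run(), the internals are guessed, and exact API names vary slightly between library versions.

using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Hypothetical wrapper around an Azure Storage queue, mirroring the calls
// made in the Run() method above (Get, Add, Delete).
public class JobQueue
{
    private readonly CloudQueue queue;

    public JobQueue(string queueName)
    {
        // DataConnectionString is the setting defined in the service configuration above.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExists();
    }

    // Returns the next message, or null if the queue is currently empty.
    public CloudQueueMessage Get()
    {
        return queue.GetMessage();
    }

    // Adds a new job message, e.g. the name of a scene file to render.
    public void Add(string content)
    {
        queue.AddMessage(new CloudQueueMessage(content));
    }

    // Removes a message once the frame has been rendered and uploaded.
    public void Delete(CloudQueueMessage message)
    {
        queue.DeleteMessage(message);
    }
}

Because each message represents a single frame, any idle worker role instance simply pulls the next visible message, which is the behaviour that produced the even spread of frame counts per instance noted above.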

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”.

So, dear reader, I’m interested to know a little bit more about your use of source control:

Are you putting your SQL Server code into a source control system?
If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using?
What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)?
Why did you make those particular software choices?
Any interesting anecdotes to share in regard to your use of source control and SQL Server?

To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above:

Are you putting your SQL Server code into a source control system?
As I think I’ve already said…yes. Always.

If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using?
I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper.

What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)?
On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN.

Why did you make those particular software choices?
I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops.

Any interesting anecdotes to share in regard to your use of source control and SQL Server?
Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out.
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus be checked-in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember….free software is up for grabs! @jamiet 

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
The M7 Chip
Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the Tech Lead in our Systems Division's Partner group, I had a front row seat to the extraordinary price/performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. We have now announced our next-generation processor, the M7. It has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). Built on 20 nm technology, this is our most advanced processor, with more cores than anything else in the industry today. Since the Sun acquisition Oracle has released five processors in four years, and the M7 will be the sixth.

The S4 core
The M7 is built on the foundation of the S4 core, the next generation of our core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250 nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support: features to improve Application Data Integrity with almost no performance loss, and the ability to use part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has a 16 KB instruction and a 16 KB data L1 cache.

The Core Clusters
The cores on the M7 chip are organized in 4-core clusters that share L2 cache. All four cores in a cluster share 256 KB of 4-way set associative L2 instruction cache, with over 1/2 TB/s of throughput. Each pair of cores shares 256 KB of 8-way set associative L2 data cache, also with over 1/2 TB/s of throughput. With this core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

The Chip
Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks, each of which is an 8-way set associative cache. The aggregate bandwidth for the L3 cache on the chip is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly, and 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips would have 1024 cores, 8192 threads and 64 TB of RAM.

Software in Silicon
The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows part of the virtual address to be used to store metadata.

Realtime Application Data Integrity
The Realtime Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. 
When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, by applications and by the OS.

M7 Database In-Memory Query Accelerator
The M7 chip also includes in-silicon query engines that accelerate tasks operating on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format, and the M7 query engines can speed up in-memory format conversion, value and range comparisons, and set membership lookups. These engines can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.

SPARC Accelerated Program
At the Hot Chips conference we also introduced the SPARC Accelerated Program to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program. 

    Read the article

  • Backpacks and Booth Paint: TechEd 2012

    - by The Un-T Guy
Arriving in the parking lot of the Orange County Convention Center, I immediately knew I was in the right place. As far as the eye could see, the acres of asphalt were awash in backpacks, quirky (to be kind) outfits, and bad haircuts. This was the place. This was Microsoft Mecca v2012 for geeks and nerds, the Central Florida event of the year, a gathering of high tech professionals whose skills I both greatly respect and, frankly, fear a little. I was wholly and completely out of my element, a dork in a vast sea of geek jumbo. It was like wearing Dockers and a golf shirt walking into a RenFaire, but one with really crappy costumes and no turkey legs...save those attached to some of the attendees.

Of course the corporate whores...errrr, vendors were in place, ready to parlay the convention's fre-nerd-ic energy into millions of dollars by convincing the big-brained and under-sexed in the crowd (i.e., virtually all of them...present company excluded, of course) that their product or service was the only thing standing between them and professional success, industry fame, and clear skin. "With KramTech 2012," they seemed to scream, "you will be THE ROCK STAR of your company's IT department!" As car shows and tattoo parlors learned long ago, tech companies seem to believe that the best way to attract the attention of this crowd is through the hint of the promise of sex. They recruit and deploy an army of "sales reps" whose primary qualifications appear to be long hair, short skirts, high heels, and a vagina. Unlike their distant cousins in the car and body art industries, however, this sub-species of booth paint (semi-gloss decoration that adds nothing to the substance of the product) seems torn between committing to being all-out sex objects and recognizing that they are in the presence of intelligent, discerning people. People who are smart enough to know exactly what these vendors are doing. Also unlike their distant car show and tattoo shop cousins, these young women (what…are there no gay tech professionals who could use some eye candy?) seem to realize that while IT remains a male-dominated field, there are ever-increasing numbers of intelligent, capable, strong professional women – women who’ve battled to make it in this field through hard work and work performance rather than a hard body and performing after work.

This is not to say that all of the young female sales reps are there only because of their physical attributes. Many are competent, intelligent, and driven -- not to mention attractive. They're working hard on the front lines of delivering the next generation of technology. The distinction is pretty clear, however, between these young professionals and the booth paint. The former enthusiastically deliver credible information about the products they’re hawking. The latter are positioned in the aisles, uncomfortably avoiding eye contact as they struggle to operate the badge readers. Surprisingly, not all of the women in attendance seemed to object to the objectification of their younger sisters. One IT professional woman who came of age in the industry (mostly in IT marketing) said, “I have no problem with it. I was a ‘booth babe’ for years and it doesn’t bother me at all.” Others, however, weren’t quite so gracious. One woman I spoke with, an IT manager from Cheyenne, Wyoming, said it was demeaning and frankly, as more and more women grow into IT management positions, not a great marketing idea. 
“Using these young women is, to me, no different than vendors giving out t-shirts to attract attention. It’s sad because it’s still hard for a woman to be respected in the IT field and this just perpetuates the outdated notion that IT is a male-dominated field.” She went on to say that decisions by vendors to employ these young women in this “inappropriate way” could impact her purchasing decisions. “I might be swayed toward a vendor who has women on staff who are intelligent and dynamic rather than the vendors who use the ‘decoration’ girls.” So in many ways, the IT industry is no different than most other industries as it struggles to maximize performance by finding and developing talent – all of the talent, not just the 50% with a penis. Women in IT, like their brethren, struggle to find their niche in the field, to grow professionally, and reach for the brass ring, struggling to overcome obstacles as they climb the mountain of professional success in a never-ending cycle of economic uncertainty. But as (generally) well-educated and highly-trained professionals, they are probably better positioned than those in many other industries. Beside, they’ve got one other advantage over their non-IT counterparts as they attempt their ascent to the summit: They’ve already got the backpacks.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

What is WCF
As Juval puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows: it provides a runtime environment to expose CLR types as services and to consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing (service hosting, instance management, reliability, transaction management, security and so on) such that it greatly increases productivity.

Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build the contracts, services, data access layer and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.

In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, while clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against the TCP or HTTP protocol without any significant changes to the code.

Solution
Services
Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
ServiceHost: to host the services, i.e. running the services at a particular address, binding and contract where clients can connect
Client: helps the end user use the services, like creating an account and transferring amounts between the accounts
BankDAL: data access layer to work with the database

BankDAL
It's a no-brainer to use an ORM, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code-first approach. There are two approaches when working with data: data driven and code driven. In the data-driven approach we start by designing tables and their constraints in the database and generate entities in code, while in the code-driven (code-first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test-driven development using mock frameworks.

The application consists of 3 entities: a Customer entity which contains customer details, and CheckingAccount and SavingsAccount to hold the respective account balances. 
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

using System;

namespace BankDAL.Model
{
    public class Customer
    {
        public int Id { get; set; }
        public string FullName { get; set; }
        public string Address { get; set; }
        public DateTime DateOfBirth { get; set; }
    }
}

In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

using System.Data.Entity;

namespace BankDAL.Model
{
    public class BankDbContext: DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
}

Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with xml files) in a CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting the entity classes with database-specific information.

using System;
using System.Data.Entity.ModelConfiguration;

namespace BankDAL.Model
{
    public class CustomerConfiguration: EntityTypeConfiguration<Customer>
    {
        public CustomerConfiguration()
        {
            Initialize();
        }

        private void Initialize()
        {
            //Setting the Primary Key
            this.HasKey(e => e.Id);

            //Setting required fields
            this.HasRequired(e => e.FullName);
            this.HasRequired(e => e.Address);
            //Todo: Can't create required constraint as DateOfBirth is not reference type, research it
            //this.HasRequired(e => e.DateOfBirth);
        }
    }
}

Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with a key of BankDbContext when working with the context.

We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities etc. These are known as repository classes, i.e., CustomerRepository.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CustomerRepository
    {
        private readonly IDbSet<Customer> _customers;

        public CustomerRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null) throw new ArgumentNullException();
            _customers = bankDbContext.Customers;
        }

        public IQueryable<Customer> Query()
        {
            return _customers;
        }

        public void Add(Customer customer)
        {
            _customers.Add(customer);
        }
    }
}

From the above code you can see that the Query method returns the customers as IQueryable, i.e. customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows filtering and joining statements to be executed from the business logic using lambda expressions without cluttering the data access layer with tens of methods. 
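The account repositories shown next repeat this same Query/Add shape for their own entity types; as the author notes further down, C# generics and the repository pattern can factor that duplication out. Purely as an illustration (this class is not part of the original series, and it assumes the DbContext.Set<TEntity>() method from the code-first API), a generic repository could look like this:

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    // Generic repository capturing the Query/Add shape shared by the
    // Customer, CheckingAccount and SavingsAccount repositories.
    public class Repository<TEntity> where TEntity : class
    {
        private readonly IDbSet<TEntity> _set;

        public Repository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null) throw new ArgumentNullException("bankDbContext");
            _set = bankDbContext.Set<TEntity>();
        }

        // Deferred execution: callers can compose Where/Join clauses before the query runs
        public IQueryable<TEntity> Query()
        {
            return _set;
        }

        public void Add(TEntity entity)
        {
            _set.Add(entity);
        }
    }
}

Entity-specific queries such as GetAccount(customerId) could then be layered on top of Query() in thin subclasses or extension methods.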
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CheckingAccountRepository
    {
        private readonly IDbSet<CheckingAccount> _checkingAccounts;

        public CheckingAccountRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null) throw new ArgumentNullException();
            _checkingAccounts = bankDbContext.CheckingAccounts;
        }

        public IQueryable<CheckingAccount> Query()
        {
            return _checkingAccounts;
        }

        public void Add(CheckingAccount account)
        {
            _checkingAccounts.Add(account);
        }

        public IQueryable<CheckingAccount> GetAccount(int customerId)
        {
            return (from act in _checkingAccounts
                    where act.CustomerId == customerId
                    select act);
        }
    }
}

The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler) we can reduce the repeated code, as sketched above. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in the subsequent articles along with WCF Unity lifetime managers by Drew.

Contracts
It is very easy to follow a contract-first approach with WCF: define the interface and apply the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

using System;
using System.ServiceModel;
using BankDAL.Model;

namespace ProfileContract
{
    [ServiceContract]
    public interface IProfile
    {
        [OperationContract]
        Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

        [OperationContract]
        Customer GetCustomer(int id);
    }
}

The ICheckingAccount contract exposes functionality for working with the checking account, i.e., getting the balance and depositing and withdrawing an amount. The ISavingsAccount contract looks the same as the checking account one.

using System.ServiceModel;

namespace CheckingAccountContract
{
    [ServiceContract]
    public interface ICheckingAccount
    {
        [OperationContract]
        decimal? GetCheckingAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}

Services
Having covered the data access layer and the contracts, here comes the core of the business logic, i.e. the services. 
ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository. 
using System;
using System.Linq;
using System.ServiceModel;
using BankDAL;
using BankDAL.Model;
using BankDAL.Repositories;
using ProfileContract;

namespace ProfileService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Profile: IProfile
    {
        public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
        {
            Customer cust = new Customer
            {
                FullName = customerName,
                Address = address,
                DateOfBirth = dateOfBirth
            };

            using (var bankDbContext = new BankDbContext())
            {
                new CustomerRepository(bankDbContext).Add(cust);
                bankDbContext.SaveChanges();
            }
            return cust;
        }

        public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
        {
            return CreateAccount(customerName, address, dateOfBirth);
        }

        public Customer GetCustomer(int id)
        {
            return new CustomerRepository(new BankDbContext()).Query()
                .Where(i => i.Id == id).FirstOrDefault();
        }
    }
}

From the above code you will observe that we are calling bankDbContext's SaveChanges method and that there is no save method specific to the Customer entity, because EF manages all changes centrally at the context level; all the pending changes are submitted in a batch, represented as a Unit of Work.

Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we are throwing an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService.

using System;
using System.Linq;
using System.ServiceModel;
using BankDAL.Model;
using BankDAL.Repositories;
using CheckingAccountContract;

namespace CheckingAccountService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Checking: ICheckingAccount
    {
        public decimal? GetCheckingAccountBalance(int customerId)
        {
            using (var bankDbContext = new BankDbContext())
            {
                CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                    .GetAccount(customerId)).FirstOrDefault();

                if (account != null)
                    return account.Balance;

                return null;
            }
        }

        public void DepositAmount(int customerId, decimal amount)
        {
            using (var bankDbContext = new BankDbContext())
            {
                var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                    .FirstOrDefault();

                if (account == null)
                {
                    account = new CheckingAccount() { CustomerId = customerId };
                    checkingAccountRepository.Add(account);
                }

                account.Balance = account.Balance + amount;
                if (account.Balance < 0)
                    throw new ApplicationException("Overdraft not accepted");

                bankDbContext.SaveChanges();
            }
        }

        public void WithdrawAmount(int customerId, decimal amount)
        {
            DepositAmount(customerId, -1 * amount);
        }
    }
}

BankServiceHost
The host acts as the glue binding contracts to their services and exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred as it allows run-time changes to service behaviour even after deployment. We have 3 services, and for each service you need to define a name (the class that implements the service, with fully qualified namespace) and an endpoint, known as the ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined a base address for each of the contracts.

<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>

We then have to open the services by creating service hosts, which will handle the incoming requests from clients.

using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}

BankClient
The client has no knowledge of the service business logic other than the functionality it exposes through the contract, the endpoints and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project references, choosing 'Add Service Reference' and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client both happen to be built using the .NET Framework. One of the pros of the manual approach is that you don't have to work against messy code-generated files. 
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); }   [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");   var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;   ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId,80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));   try { TransferFundsFromSavingsToCheckingAccount(customerId,30); } catch (Exception e) { Debug.WriteLine(e.Message); }   Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } }

We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new instance of the channel affects it in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from Savings to Checking, resulting in an overdraft exception on Savings while the $30 is still deposited to Checking. As we are not running both requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling.

Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favourite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
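Since the author mentions that EF looks for a connection string keyed on the context class name (BankDbContext), the ServiceHost config needs an entry along these lines; the server and catalog values below are placeholders for illustration, not taken from the article:

<connectionStrings>
  <add name="BankDbContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=BankDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>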

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
These last few days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I’m helping a company to build a very scalable – yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for “Event-Based Activation”) In order to make life even easier, Microsoft released the External Activator application that saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the invoking application is executed and the invoice is calculated. The very nice feature of External Activator is that it can automatically execute as many configured applications as needed in order to process as many messages as your system can handle. This also makes it a lot easier to create a scale-out solution, leaving the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of Stored Procedures. Now, if everything works correctly, you don’t have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if for some reason your application fails to process the messages? For example, what if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is, how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is a condition that can fire the activation process)? 
The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

declare @conversationHandle uniqueidentifier;
declare @n xml = N'
<EVENT_INSTANCE>
  <EventType>QUEUE_ACTIVATION</EventType>
  <PostTime>' + CONVERT(CHAR(24),GETDATE(),126) + '</PostTime>
  <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
  <ServerName>[your_server_name]</ServerName>
  <LoginName>[your_login_name]</LoginName>
  <UserName>[your_user_name]</UserName>
  <DatabaseName>[your_database_name]</DatabaseName>
  <SchemaName>[your_queue_schema_name]</SchemaName>
  <ObjectName>[your_queue_name]</ObjectName>
  <ObjectType>QUEUE</ObjectType>
</EVENT_INSTANCE>'

begin dialog conversation
    @conversationHandle
from service
    [<your_initiator_service_name>]
to service
    '<your_event_notification_service>'
on contract
    [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
with
    encryption = off,
    lifetime = 6000;

send on conversation
    @conversationHandle
message type
    [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

end conversation @conversationHandle;

That’s it! Put the code in a Stored Procedure (a sketch follows below) and you can add to your application a button that says “Force Queue Processing” (or something similar) in order to start the activation process whenever you need it (which should not happen too frequently, but it may happen).

PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
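For convenience, here is one way the snippet could be packaged as that stored procedure, with the queue name passed in as a parameter. The procedure name, the parameters and the bracketed service placeholders are illustrative, not something prescribed by the post:

CREATE PROCEDURE dbo.ForceQueueProcessing
    @QueueSchemaName sysname,
    @QueueName       sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @conversationHandle uniqueidentifier;
    -- Build the same QUEUE_ACTIVATION event payload as above, but fill in
    -- the environment-specific values at run time.
    DECLARE @n xml = N'
<EVENT_INSTANCE>
  <EventType>QUEUE_ACTIVATION</EventType>
  <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + N'</PostTime>
  <SPID>' + CAST(@@SPID AS VARCHAR(9)) + N'</SPID>
  <ServerName>' + @@SERVERNAME + N'</ServerName>
  <LoginName>' + SUSER_SNAME() + N'</LoginName>
  <UserName>' + USER_NAME() + N'</UserName>
  <DatabaseName>' + DB_NAME() + N'</DatabaseName>
  <SchemaName>' + @QueueSchemaName + N'</SchemaName>
  <ObjectName>' + @QueueName + N'</ObjectName>
  <ObjectType>QUEUE</ObjectType>
</EVENT_INSTANCE>';

    BEGIN DIALOG CONVERSATION @conversationHandle
        FROM SERVICE [<your_initiator_service_name>]
        TO SERVICE   '<your_event_notification_service>'
        ON CONTRACT  [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
        WITH ENCRYPTION = OFF, LIFETIME = 6000;

    SEND ON CONVERSATION @conversationHandle
        MESSAGE TYPE [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

    END CONVERSATION @conversationHandle;
END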

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday. Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE: And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all? The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them, at all. However, what if you're Marcus Lagergren who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed anytime and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation, i.e., this is the rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergen and not the general Java population. For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this: AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath. Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file: Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8, doesn't make a difference for these purposes, create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. And below you can see that not only are the classes found but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, where normally, if I hadn't removed "ct.sym", I would have seen red error marking instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work. 
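For reference, both options Jan mentions can be used directly from the command line: -XDignore.symbol.file makes javac compile against rt.jar instead of the symbol file, and -Xbootclasspath/p: lets you run against patched versions of bootclasspath classes. The paths and class names below are made-up examples, not taken from this post:

javac -XDignore.symbol.file -d build/classes src/org/example/UsesInternalAsm.java
java -Xbootclasspath/p:build/patched-rt-classes.jar org.example.UsesInternalAsm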
Simply deleting the "ct.sym" file (or back it up somewhere and put it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a motivation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym". So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D’Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that ties feedback and bug submission to Oracle Social Network Conversations, allowing developers, end users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he’d buy with the $500 in Amazon gift cards: “Oracle Social Network.” Nice one.

In their own words: From the developer's perspective it’s important to get feedback soon, so that after a first iteration, when end users start to test, they can give feedback on the application. Previously it stopped there, and it was up to the developer to communicate further by email, phone etc. With OSN all feedback and communication gets logged, and other people can see the discussion immediately as well. From the end user's perspective, they can now communicate more efficiently, not only with the developers but also between themselves. Maybe many end users (in different locations) would like to change some behaviour; by using OSN they can see the entry somebody put in with a screenshot and just start to chat about it. Some key technical end users can lighten the tasks of the development team by looking at the feedback first and starting to communicate with their peers. The project manager now has the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back into OSN and see what was said at that moment in time. Integrating OSN into APEX applications enhances the user experience, makes the lives of the developers easier and gives a better overview to project managers.

Incidentally, you may already know Dimitri and Martin, since both are Oracle ACE Directors. I ran into Martin at the ACE Director briefings Friday before the conference started, and at that point he wasn’t sure he’d have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges. 
Speaking of, special thanks to our panel of three judges: Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community Finally, thanks to everyone who made this possible, including: The three other teams from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the ‘Lab John Sim (@jrsim_uix), who finished and presented entries. I’ll be posting the details of their work this week. The one guy who finished an entry, but couldn’t make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he’s fine, don’t worry. I’ll post details of his work next week, too. The 40-plus people who registered to compete in the Challenge. Noel for all his hard work, sample code, and flying monkey target, more on that to come. The Oracle Social Network development team for supporting this event. Everyone in legal and the beta program office for their help. And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support. Sorry if I’ve missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more. 

    Read the article

  • How to nest transactions nicely - &quot;begin transaction&quot; vs &quot;save transaction&quot; and SQL Server

    - by Brian Biales
Do you write stored procedures that might be used by others? And those others may or may not have already started a transaction? And your SP does several things, but if any of them fail, you have to undo them all and return with a code indicating it failed?

Well, I have written such code, and it wasn’t working right until I finally figured out how to handle both the case when we are already in a transaction and the case where the caller did not start a transaction. When a problem occurred, my “ROLLBACK TRANSACTION” would roll back not just my nested transaction, but the caller’s transaction as well. So when I tested the procedure stand-alone it seemed to work fine, but when others used it, it would cause a problem if it had to roll back. When something went wrong in my procedure, their entire transaction was rolled back. This was not appreciated.

Now, I knew one could "nest" transactions, but the technical documentation was very confusing. And I still have not found the approach below documented anywhere. So here is a very brief description of how I got it to work; I hope you find this helpful.

My example is a stored procedure that must figure out on its own if the caller has started a transaction or not. This can be done in SQL Server by checking the @@TRANCOUNT value. If no BEGIN TRANSACTION has occurred yet, this will have a value of 0. Any number greater than zero means that a transaction is in progress. If there is no current transaction, my SP begins a transaction. But if a transaction is already in progress, my SP uses SAVE TRANSACTION and gives it a name. SAVE TRANSACTION creates a “save point”. Note that creating a save point has no effect on @@TRANCOUNT. So my SP starts with something like this:

DECLARE @startingTranCount int
SET @startingTranCount = @@TRANCOUNT

IF @startingTranCount > 0
    SAVE TRANSACTION MySavePointName
ELSE
    BEGIN TRANSACTION
-- …

Then, when ready to commit the changes, you only need to commit if we started the transaction ourselves:

IF @startingTranCount = 0
    COMMIT TRANSACTION

And finally, to roll back just your changes so far:

-- Roll back changes...
IF @startingTranCount > 0
    ROLLBACK TRANSACTION MySavePointName
ELSE
    ROLLBACK TRANSACTION

Here is some code that you can try that will demonstrate how the save points work inside a transaction. This sample code creates a temporary table, then executes selects and updates, documenting what is going on, then deletes the temporary table. If running in SQL Server Management Studio, set Query Results to Text for best readability of the results.

-- Create a temporary table to test with, we'll drop it at the end. 
CREATE TABLE #ATable( [Column_A] [varchar](5) NULL ) ON [PRIMARY] GO SET NOCOUNT ON -- Ensure just one row - delete all rows, add one DELETE #ATable -- Insert just one row INSERT INTO #ATable VALUES('000') SELECT 'Before TRANSACTION starts, value in table is: ' AS Note, * FROM #ATable SELECT @@trancount AS CurrentTrancount --insert into a values ('abc') UPDATE #ATable SET Column_A = 'abc' SELECT 'UPDATED without a TRANSACTION, value in table is: ' AS Note, * FROM #ATable BEGIN TRANSACTION SELECT 'BEGIN TRANSACTION, trancount is now ' AS Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '123' SELECT 'Row updated inside TRANSACTION, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION MySavepoint SELECT 'Save point MySavepoint created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '456' SELECT 'Updated after MySavepoint created, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION point2 SELECT 'Save point point2 created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '789' SELECT 'Updated after point2 savepoint created, value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION point2 SELECT 'Just rolled back savepoint "point2", value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION MySavepoint SELECT 'Just rolled back savepoint "MySavepoint", value in table is: ' AS Note, * FROM #ATable SELECT 'Both save points were rolled back, transaction count still:' as Note, @@TRANCOUNT AS TranCount ROLLBACK TRANSACTION SELECT 'Just rolled back the entire transaction..., value in table is: ' AS Note, * FROM #ATable DROP TABLE #ATable The output should look like this: Note                                           Column_A ---------------------------------------------- -------- Before TRANSACTION starts, value in table is:  000 CurrentTrancount ---------------- 0 Note                                               Column_A -------------------------------------------------- -------- UPDATED without a TRANSACTION, value in table is:  abc Note                                 TranCount ------------------------------------ ----------- BEGIN TRANSACTION, trancount is now  1 Note                                                Column_A --------------------------------------------------- -------- Row updated inside TRANSACTION, value in table is:  123 Note                                                   TranCount ------------------------------------------------------ ----------- Save point MySavepoint created, transaction count now: 1 Note                                                   Column_A ------------------------------------------------------ -------- Updated after MySavepoint created, value in table is:  456 Note                                              TranCount ------------------------------------------------- ----------- Save point point2 created, transaction count now: 1 Note                                                        Column_A ----------------------------------------------------------- -------- Updated after point2 savepoint created, value in table is:  789 Note                                                     Column_A -------------------------------------------------------- -------- Just rolled back savepoint "point2", value in table is:  456 Note                                                          Column_A ------------------------------------------------------------- -------- Just rolled back savepoint "MySavepoint", value in table is:  123 
Note                                                        TranCount ----------------------------------------------------------- ----------- Both save points were rolled back, transaction count still: 1 Note                                                            Column_A --------------------------------------------------------------- -------- Just rolled back the entire transaction..., value in table is:  abc
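Putting the fragments together, a skeleton procedure using this pattern might look like the following; the procedure name, the savepoint name and the TRY/CATCH wrapper are my own additions rather than something prescribed in the post:

CREATE PROCEDURE dbo.DoSeveralThings
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @startingTranCount int
    SET @startingTranCount = @@TRANCOUNT

    -- Only begin a transaction if the caller hasn't already started one;
    -- otherwise just mark a savepoint we can roll back to.
    IF @startingTranCount > 0
        SAVE TRANSACTION MySavePointName
    ELSE
        BEGIN TRANSACTION

    BEGIN TRY
        -- ... do the procedure's real work here ...

        -- Commit only if we own the outermost transaction
        IF @startingTranCount = 0
            COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- Undo only our own changes, leaving the caller's transaction alive.
        -- (If the error doomed the whole transaction, XACT_STATE() = -1 and
        -- only a full rollback is possible.)
        IF @startingTranCount > 0 AND XACT_STATE() = 1
            ROLLBACK TRANSACTION MySavePointName
        ELSE
            ROLLBACK TRANSACTION

        RETURN 1   -- signal failure to the caller
    END CATCH

    RETURN 0
END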

    Read the article

  • Top Reasons You Need A User Engagement Platform

    - by Michael Snow
    Guest post by: Amit Sircar, Senior Sales Consultant, Oracle Deliver complex enterprise functionality through a simple intuitive and unified User Interface (UI) The modern enterprise contains a wide range of applications that are used to manage the business and drive competitive advantages. Organizations respond by creating a complex structure that results in a functional and management grouping of users. Each of these groups of users requires access to multiple applications and information sources in order to perform their job functions. This leads to the lack of a unified view of enterprise information, inconsistent user interfaces and disjointed security. To be effective, portals must be designed from the end-user perspective, enabling the user to accomplish as many tasks as possible while visiting the fewest number of portals. This requires rethinking the way that portals are built, moving from a functional business unit perspective to a user-focused, process-oriented point of view. Oracle WebCenter provides the Common User Experience Architecture that allows organizations to seamlessly present a unified view of enterprise information tailored to a particular user’s role and preferences. This architecture provides the best practices, design patterns and delivery mechanism for myriad services, applications, and data sources.  In order to serve as a primary system of access, Oracle WebCenter also provides access to unstructured content and to other users via integrated search, service-oriented artifacts, content management, and collaboration tools. Provide a modern and engaging experience without modifying the core business application Web 2.0 technologies such as blogs, wikis, forums or social media sites are having a profound impact in the public internet.  These technologies can be leveraged by enterprises to add significant value to the business. Organizations need to integrate these technologies directly into their business applications while continuing to meet their security and governance needs. To deliver richer connections and become a more agile and intelligent business, WebCenter provides an enterprise portal platform that contains pre-integrated, standards-based Enterprise 2.0 services. These Enterprise 2.0 services can be easily accessed, integrated and utilized by users. By giving users the ability to use and integrate Enterprise 2.0 services such as tags, links, wikis, activities, blogs or social networking directly with their portals and applications, they are empowered to make richer connections, optimize their productivity, and ultimately increase the value of their applications. Foster a collaborative experience The organizational workplace has undergone a major change in the last decade. With increasing globalization and a distributed workforce, project teams may be physically separated by large distances. Online collaboration technologies are becoming a critical resource to enable virtual teams to share information and work together effectively. Oracle WebCenter delivers dynamic business communities with rich Services to empower teams to quickly and efficiently manage their information, applications, projects, and people without requiring IT assistance. It brings together the latest technology around Enterprise 2.0 and social computing, communities, personal productivity, and ad-hoc team interactions without any development effort. 
It enables sharing of and collaboration on team content, focusing an organization’s valuable resources on solving business problems, tapping into new ideas, and reducing time-to-market. Mobile Support The traditional workplace dynamics that required employees to access their work applications from their desktops have undergone a fundamental shift. Employees were used to primarily working from company offices and utilized an IT-issued computer for performing their job functions. With the introduction of flexible work hours and the growth of remote workers, more and more employees need the ability to remain productive, via tablets and smartphones, even when they do not have access to a computer. In addition, customers and citizens have come to expect 24x7 access to resources and websites from wherever they are located. Tablets and smartphones have empowered everyone to quickly access the services they need anytime and from any place. WebCenter provides out-of-the-box capabilities to deliver the mobile experience in a seamless manner. Seeded device profiles and toolkits within WebCenter can be used to render the same web pages on multiple target devices such as iPads, iPhones, and Android devices. Web designers can preview the portal using the built-in simulator, make the necessary updates, and then deploy their UI design for the targeted device. Conclusion The competitive economy and resource constraints facing organizations today require them to find ways to make their applications, portals and Web sites more agile and intelligent and their knowledge workers more productive no matter where they are located. Organizations need to provide faster access to relevant information and resources, enhance existing applications and business processes with rich Enterprise 2.0 services, and seamlessly deliver content to mobile platforms. Oracle WebCenter successfully meets these challenges by providing the modern user experience platform for the enterprise and the Web.

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren’t as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click Publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents. Introduction The slide show below should help you understand the high-level details of what we are trying to achieve... Leveraging Azure for Performance Testing Scenario 1 – Running a Test Rig in Windows Azure To start off with the basics, in the first scenario I plan to discuss how to: - Automate deployment & configuration of Windows Azure Worker Roles for the Test Controller and Test Agent - Automate deployment & configuration of the SQL database on the Test Controller Worker Role - Scale Test Agents on demand - Create a Web Performance Test and a simple Load Test - Manage Test Controllers right from Visual Studio on the on-premise developer workstation - View results of the Load Test - Clean up - Have the above work in the shape of a reusable solution for both a VS2010 and VS2012 Test Rig Scenario 2 – The scaled-out Test Rig and sharing data using SQL Azure A scaled-out version of this implementation would involve multiple test rigs running in the cloud; in this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using Windows Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store. - Deploy multiple test rigs using the reusable solution from scenario 1 - Set up and configure Windows Azure Sync - Test the SQL Azure Load Test result database created as a result of Windows Azure Sync - Clean up - Have the above work in the shape of a reusable solution for both a VS2010 and VS2012 Test Rig The Ingredients Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the following to try out the scenarios: 1. Windows Azure Subscription 2. Windows Azure Storage – Blob Storage 3. Windows Azure Compute – Worker Role 4. SQL Azure Database 5. SQL Data Sync 6. Windows Azure Connect – End points 7. SQL 2012 Express or SQL 2008 R2 Express 8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010 9. 
A developer workstation set up with Visual Studio 2012 – Ultimate or Visual Studio 2010 – Ultimate 10. Visual Studio Load Test Unlimited Virtual User Pack. Walkthrough To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure Blob storage. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we successfully create a virtual network amongst the machines, enabling two-way communication. All of the above can be done programmatically; let’s see step by step how… Scenario 1 Video Walkthrough – Leveraging Windows Azure for Performance Testing Scenario 2 Work in progress, watch this space for more… Solution If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I’ll add you to my TFS Preview project which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests. Conclusion Other posts and resources are available here. Possibilities…. Endless!

    Read the article

  • Delving into design patterns, and what that means for the Oracle user experience

    - by Kathy.Miedema
    By Kathy Miedema, Oracle Applications User Experience George Hackman, Senior Director, Applications User Experiences The Oracle Applications User Experience team has some exciting things happening around Fusion Applications design patterns. Because we’re hoping to have some new offerings soon (stay tuned with VoX to see what’s in the pipeline around Fusion Applications design patterns), now is a good time to talk more about what design patterns can do for the individual user as well as the entire company. George Hackman, Senior Director of Operations User Experience, says the first thing to note is that user experience is not just about the user interface. It’s about understanding how people do things, observing them, and then finding the patterns that emerge. The Applications UX team develops those patterns and then builds them into Oracle applications. What emerges, Hackman says, is a consistent, efficient user experience that promotes a productive workplace. Creating design patterns What is a design pattern in the context of enterprise software? “Every day, people use technology to get things done,” Hackman says. “They navigate a virtual world that reaches from enterprise to consumer apps, and from desktop to mobile. This virtual world is constantly under construction. New areas are being developed and old areas are being redone. As this world is being built and remodeled, efficient pathways and practices emerge. “Oracle's user experience team watches users navigate this world. We measure their productivity and ask them about their satisfaction. We take the most efficient, most productive pathways from the enterprise and consumer world and turn them into Oracle's user experience patterns.” Hackman describes the process as combining all of the best practices from every part of a user’s world. Members of the user experience team observe, analyze, design, prototype, and measure each work task to find the best possible pattern for a particular work flow. As the team builds the patterns, “we make sure they are fully buildable using Oracle technology,” Hackman said. “So customers know they can use these patterns. There’s no need to make something up from scratch, not knowing whether you can even build it.” Hackman says that creating something on a computer is a good example of a user experience pattern. “People are creating things all the time,” he says. “On the consumer side, they are creating documents. On the enterprise side, they are creating expense reports. On a mobile phone, they are creating contacts. They are using different apps like iPhone or Facebook or Gmail or Oracle software, all doing this creation process.” The Applications UX team starts their process by observing how people might create something. “We observe people creating things. We see the patterns, we analyze and document, then we apply them to our products. It might be different from phone to web browser, but we have these design patterns that create a consistent experience across platforms, and across products, too. The result for customers Oracle constantly improves its part of the virtual world, Hackman said. New products are created and existing products are upgraded. Because Oracle builds user experience design patterns, Oracle's virtual world becomes both more powerful and more familiar at the same time. Because of design patterns, users can navigate with ease as they embrace the latest technology – because it behaves the way they expect it to. 
This means less training and faster adoption for individual users, and more productivity for the business as a whole. Hackman said Oracle gives customers and partners access to design patterns so that they can build in the virtual world using the same best practices. Customers and partners can extend applications with a user experience that is comfortable and familiar to their users. For businesses that are integrating different Oracle applications, design patterns are key. The user experience created in E-Business Suite should be similar to the user experience in Fusion Applications, Hackman said. If a user is transitioning from one application to the other, it shouldn’t be difficult for them to do their work. With design patterns, it isn’t. “Oracle user experience patterns are the building blocks for the virtual world that ensure productivity, consistency and user satisfaction,” Hackman said. “They are built for the enterprise, but incorporate the best practices from across the virtual world. They empower productivity and facilitate social interaction. When you build with patterns, you get all the end-user benefits of less training / retraining from the finished product. You also get faster / cheaper development.” What’s coming? You can already access design patterns to help you build Dashboards with OBIEE here. And we promised you at the beginning that we had something in the pipeline on Fusion Applications design patterns. Look for the announcement about when they are available here on VoX.

    Read the article

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high risk, privileged accounts.  If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive) For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A, and was only able to answer a few of the questions that came in.  So I wanted to devote this blog to answering the outstanding questions.  Here they are. 1. Can OPAM track admin or DBA activity details during a password check-out session? Oracle Audit Vault is monitoring these activities which can be correlated to check-out events. 2. How would OPAM handle simultaneous requests? OPAM can be configured to allow for shared passwords.  By default sharing is turned off. 3. How long are the passwords valid?  Are the admins required to manually check them in? Password expiration can be configured and set in the password policy according to your corporate standards.  You can specify if you want forced check-in or not. 4. Can 2-factor authentication be used with OPAM? Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager, and Oracle Adaptive Access Manager. 5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts? OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform. 6. How and where are the passwords stored in OPAM? OPAM uses Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords.  This is the same system used by Oracle Applications. 7. Does OPAM support hierarchical/level based privileges?  Is the log maintained for independent review/audit? Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.  8. Does OPAM support emergency access in the case where approvers are not available until later? Yes.  OPAM can be configured to release a password under a "break-glass" emergency scenario. 9. Does OPAM work with AIX? Yes supported UNIX version are listed in the "certified component section" of the UNIX connector guide at:http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 10. Does OPAM integrate with Sun Identity Manager? Yes.  OPAM can be integrated with SIM using the REST  APIs.  OPAM has direct integration with Oracle Identity Manager 11gR2. 11. Is OPAM available today and what does it cost? Yes.  OPAM is available now.  Ask your Oracle Account Manager for pricing. 12. Can OPAM be used in SAP environments? Yes, supported SAP version are listed in the "certified component section" of the SAP  connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0 13. How would this product integrate, if at all, with access to a particular field in the DB that need additional security such as SSN's? OPAM can work with DB Vault and DB Firewall to provide the fine grained access control for databases. 14. Is VM supported? As a deployment platform Oracle VM is supported. For further details about supported Virtualization Technologies see Oracle Fusion Middleware Supported System configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html 15. Where did this (OPAM) technology come from? OPAM was built by Oracle Engineering. 
16. Are all Linux flavors supported?  How about BSD? BSD is not supported. For supported UNIX versions, see the "certified component section" of the UNIX connector guide: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 17. What happens if users don't check passwords in at the end of a work task? In OPAM, a time frame can be defined for how long a password can be checked out. The security admin can force a check-in at any given time. 18. Is MySQL supported? Yes, supported DB versions are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA 19. What happens when OPAM crashes and you need to use the password? OPAM can be configured for high availability, but if required, OPAM data can be backed up and recovered.  See the OPAM admin guide. 20. Is OPAM a standalone product, or does it leverage other components from IDM? OPAM can be run standalone, but it can also leverage other IDM components.

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value?A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employee's motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or ideally less time, adds value for that individual and in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com ‘About page’ outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded marketshare. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure like email or instant messaging.  Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understand that this now established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: The last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before increased capability of engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • IE9 not rendering box-shadow Elements inside of Table Cells

    - by Rick Strahl
    Ran into an annoying problem today with IE 9. Slowly updating some older sites with CSS 3 tags and for the most part IE9 does a reasonably decent job of working with the new CSS 3 features. Not all by a long shot but at least some of the more useful ones like border-radius and box-shadow are supported. Until today I was happy to see that IE supported box-shadow just fine, but I ran into a problem with some old markup that uses tables for its main layout sections. I found that inside of a table cell IE fails to render a box-shadow. Below are images from Chrome (left) and IE 9 (right) of the same content: The download and purchase images are rendered with: <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a> where the .boxshadow and .roundbox styles look like this:.boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; } .roundbox { -moz-border-radius: 6px 6px 6px 6px; -webkit-border-radius: 6px; border-radius: 6px 6px 6px 6px; } And the Problem is… collapsed Table Borders Now normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup - because the parent container is a table cell.<td class="sidebar" style="border-collapse: collapse"> … <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a> …</td> This HTML causes the image to not show a shadow. In actuality I'm not styling inline, but as part of my browser Reset I have the following in my master .css file:table { border-collapse: collapse; border-spacing: 0; } which has the same effect as the inline style. border-collapse by default inherits from the parent and so the TD inherits from table and tr - so TD tags are effectively collapsed. You can check out a test document that demonstrates this behavior here in this CodePaste.net snippet or run it here. How to work around this Issue To get IE9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly NOT to use border-collapse:<td class="sidebar" style="border-collapse: separate; border-width: 0;"> Or better yet (thanks to David's comment below), you can add the border-collapse: separate to the .boxshadow style like this:.boxshadow { -moz-box-shadow: 3px 3px 5px #535353; -webkit-box-shadow: 3px 3px 5px #535353; box-shadow: 3px 3px 5px #535353; border-collapse: separate; } With either of these approaches IE renders the shadows correctly. Do you really need border-collapse? Should you bother with border-collapse? I think so! Collapsed borders render flat as a single fat line if a border-width and border-color are assigned, while separated borders render a thin line with a bunch of weird white space around it or worse render a old skool 3D raised border which is terribly ugly as well. So as a matter of course in any app my browser Reset includes the above code to make sure all tables with borders render the same flat borders. As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds) most of which is caused by the way that IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes problems with rendering. There you have it. Another crappy failure in IE we have to check for now, just one more reason to hate Internet Explorer. 
Luckily this one has a reasonably easy workaround. I hope this helps out somebody and saves them the hour I spent trying to figure out what caused this problem in the first place. Resources Sample HTML document that demonstrates the behavior Run the Sample © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML, Internet Explorer

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example:

Option|DetectDelimiter
Option|UsePropertyNames|true

This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I have no answer to the following question. I saw a desperate email marked as urgent delivered to my mailbox. “I accidentally renamed a table in SSMS. I was scrolling very fast and made a mistake; it was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix it. I am in big trouble. Help me get my original table name back.” I have seen many similar scenarios in my life, and they give me a very good opportunity to preach wisdom, but when the house is burning we cannot talk about how we should have conserved the water earlier. The goal at that point is to put out the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do that may give you an idea of what your table name was, if you are lucky. Method 1: (Not Recommended but try your luck) Check the naming conventions of your system. I have often seen organizations name their indexes as IX_TableName_Cols or their keys as FK_TableName1_TableName2_Cols. If your organization follows the same convention, you may be able to recover the name by referring to your indexes and keys. Again, note that it is quite possible that your table was renamed earlier and the keys were never updated, which can easily lead you to select an incorrect name. Follow this only if you are confident; otherwise move to the next method. Method 2: (Not Recommended but try your luck) This method is also based on your organization’s naming conventions. If you use the table name in any column name (some organizations include the table name in their incremental identity column), you can get the name from there. Method 3: (Not Recommended but try your luck) If you know which stored procedures used the table, you can script those stored procedures and find the original table name there. Method 4: (Try your luck) The best organizations create a data model of the schema first, and there is a good chance that the table appears there; take your chances and refer to the original document. If your organization is good at managing docs or source code, you will get the name of the table back for sure. Method 5: (It WORKS but try on a development server) There is no sure way to get back the name of a table you accidentally renamed; however, there is one approach that will work for sure. Take your latest full backup and restore it on your development server (remember: not on production, and not on the server where you renamed the table). Now restore the latest differential backup on top of it. Then restore the log files one by one, making sure you stop restoring just before the point in time at which you renamed the table. Now explore the restored database and you will see the original name of the table you renamed. If you are confident that the table existed with the same name when the last full backup was taken, you do not have to go through all the steps; you can get the name of the table directly from the restore of the last full backup. Read the article about Backup Timeline. Wisdom: How can I pass up the chance to preach wisdom when I get the opportunity? Here are a few points to remember. Use a different account to explore the production environment; do not use an account which has all rights and permissions all the time. Use an account with read-only permissions when no modifications are required. Use Policy-Based Management to prevent accidental changes. 
If there were a policy enforcing valid names, an accidental rename of the table would not have been possible unless the change was intentional and deliberate. Have proper auditing of the system in place. You can use DDL triggers, but be careful with their usage (get them reviewed properly first). (Add your suggestion here) I guess Method 5 will work every time (using point-in-time restore). Everything else is a matter of luck, and if your luck is bad you will end up with yet another incorrect name. Now go back and read the first line of this blog. Out of the five methods, four are just lucky guesses. Method 5 will work, but again it is a lengthy process if the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
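For reference, Method 5 boils down to a standard point-in-time restore sequence. The sketch below is only illustrative: the database name, backup file paths, logical file names and STOPAT time are hypothetical placeholders you would replace with your own values, and the differential step is simply skipped if no differential backup exists.

    -- Restore the last full backup to a development server, leaving it ready for further restores
    RESTORE DATABASE MyDB_Copy
    FROM DISK = N'D:\Backups\MyDB_Full.bak'
    WITH MOVE 'MyDB'     TO N'D:\Data\MyDB_Copy.mdf',
         MOVE 'MyDB_log' TO N'D:\Data\MyDB_Copy.ldf',
         NORECOVERY;

    -- Restore the latest differential backup (if one exists), still without recovering
    RESTORE DATABASE MyDB_Copy
    FROM DISK = N'D:\Backups\MyDB_Diff.bak'
    WITH NORECOVERY;

    -- Restore log backups one by one; on the final one, stop just before the accidental rename
    RESTORE LOG MyDB_Copy
    FROM DISK = N'D:\Backups\MyDB_Log1.trn'
    WITH NORECOVERY;

    RESTORE LOG MyDB_Copy
    FROM DISK = N'D:\Backups\MyDB_Log2.trn'
    WITH STOPAT = '2012-10-01 09:55:00',   -- a moment before the rename happened
         RECOVERY;

    -- The original table name is now visible in the restored copy
    SELECT name
    FROM   MyDB_Copy.sys.tables
    ORDER BY name;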

    Read the article

  • How to Play PC Games on Your TV

    - by Chris Hoffman
    No need to wait for Valve’s Steam Machines — connect your Windows gaming PC to your TV and use powerful PC graphics in the living room today. It’s easy — you don’t need any unusual hardware or special software. This is ideal if you’re already a PC gamer who wants to play your games on a larger screen. It’s also convenient if you want to play multiplayer PC games with controllers in your living room. HDMI Cables and Controllers You’ll need an HDMI cable to connect your PC to your television. This requires a TV with HDMI-in, a PC with HDMI-out, and an HDMI cable. Modern TVs and PCs have had HDMI built in for years, so you should already be good to go. If you don’t have a spare HDMI cable lying around, you may have to buy one or repurpose one of your existing HDMI cables. Just don’t buy the expensive HDMI cables — even a cheap HDMI cable will work just as well as a more expensive one. Plug one end of the HDMI cable into the HDMI-out port on your PC and one end into the HDMI-in port on your TV. Switch your TV’s input to the appropriate HDMI port and you’ll see your PC’s desktop appear on your TV. Your TV becomes just another external monitor. If you have your TV and PC far away from each other in different rooms, this won’t work. If you have a reasonably powerful laptop, you can just plug that into your TV — or you can unplug your desktop PC and hook it up next to your TV. Now you’ll just need an input device. You probably don’t want to sit directly in front of your TV with a wired keyboard and mouse! A wireless keyboard and wireless mouse can be convenient and may be ideal for some games. However, you’ll probably want a game controller like console players use. Better yet, get multiple game controllers so you can play local-multiplayer PC games with other people. The Xbox 360 controller is the ideal controller for PC gaming. Windows supports these controllers natively, and many PC games are designed specifically for these controllers. Note that Xbox One controllers aren’t yet supported on Windows because Microsoft hasn’t released drivers for them. Yes, you could use a third-party controller or go through the process of pairing a PlayStation controller with your PC using unofficial tools, but it’s better to get an Xbox 360 controller. Just plug one or more Xbox controllers into your PC’s USB ports and they’ll work without any setup required. While many PC games support controllers, bear in mind that some games require a keyboard and mouse. A TV-Optimized Interface Use Steam’s Big Picture interface to more easily browse and launch games. This interface was designed for use on a television with controllers and even has an integrated web browser you can use with your controller. It will be used on Valve’s Steam Machine consoles as the default TV interface. You can use a mouse with it too, of course. There’s also nothing stopping you from just using your Windows desktop with a mouse and keyboard — aside from how inconvenient it will be. To launch Big Picture Mode, open Steam and click the Big Picture button at the top-right corner of your screen. You can also press the glowing Xbox logo button in the middle of an Xbox 360 Controller to launch the Big Picture interface if Steam is open. Another Option: In-Home Streaming If you want to leave your PC in one room of your home and play PC games on a TV in a different room, you can consider using local streaming to stream games over your home network from your gaming PC to your television. 
Bear in mind that the game won’t be as smooth and responsive as it would if you were sitting in front of your PC. You’ll also need a modern router with fast wireless network speeds to keep up with the game streaming. Steam’s built-in In-Home Streaming feature is now available to everyone. You could plug a laptop with less-powerful graphics hardware into your TV and use it to stream games from your powerful desktop gaming rig. You could also use an older desktop PC you have lying around. To stream a game, log into Steam on your gaming PC and log into Steam with the same account on another computer on your home network. You’ll be able to view the library of installed games on your other PC and start streaming them. NVIDIA also has their own GameStream solution that allows you to stream games from a PC with powerful NVIDIA graphics hardware. However, you’ll need an NVIDIA Shield handheld gaming console to do this. At the moment, NVIDIA’s game streaming solution can only stream to the NVIDIA Shield. However, the NVIDIA Shield device can be connected to your TV so you can play that streaming game on your TV. Valve’s Steam Machines are supposed to bring PC gaming to the living room and they’ll do it using HDMI cables, a custom Steam controller, the Big Picture interface, and in-home streaming for compatibility with Windows games. You can do all of this yourself today — you’ll just need an Xbox 360 controller instead of the not-yet-released Steam controller. Image Credit: Marco Arment on Flickr, William Hook on Flickr, Lewis Dowling on Flickr

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail regardless of the reason is almost like sitting in a room with a "Dementor" from Harry Potter. It will literally suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political challenges or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here a few observations about projects failing for political reasons. Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stake holders do not truly want it solved. This could come from a consultant being more interested in extended the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stake holders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project. Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. These choices may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours that they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the break and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car and possibly the passengers as well.   It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice but useless if the software doesn't work.   You can never get so distracted by the next goal that you fail to meet the current goal. Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team from the most senior analyst to the most junior recruit trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all cost. Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the Happiness of the Team and the Chances of Success.   Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future. 
To a certain extent, they seem to be inevitable, but they should never be suicide or ugly. Ideally, they can all be "Mission Impossible" where everyone works hard, has fun, and knows that there is good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know that sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • PASS: Board Q&amp;A at the Summit

    - by Bill Graziano
    The last two years we’ve put the Board in front of the members and taken questions.  We’re going to do that again this year.  It will be in Room 307/308 from 12:15 to 1:30 on Friday. Yes, this time overlaps with the Birds of a Feather Lunch and the start of afternoon sessions – but only partially.  You can attend the Q&A and still get to parts of both of those.  There just isn’t a great time to do this.  Every time overlaps with something. We can’t do it after the last session on Friday.  We can’t fit it between the last session and the evening events on Wednesday or Thursday.  We had some discussion around breakfast time but I didn’t think that was realistic.  This is the least bad time we could come up with. Last year we had 60-70 people attend.  These are the items that were specific things that I could work on: The first question was whether to increase transparency around individual votes of Board members.  We approved this at the Board meeting the following day.  The only caveat was that if the Board is given confidential information as a basis for their vote then we may not be able to disclose individual votes.  Putting a Director in a position where they can’t publicly defend the reason for their vote is a difficult situation.  Thanks Kendal! Can we have a Board member discretionary fund?  As background, I took a couple of people to lunch so we could have a quiet place to talk.  I bought lunch but wasn’t able to expense it back to PASS.  We just don’t have a budget item for things like this.  I think we should.  I would guess the entire Board would like it also.  It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget.  I’d like to see it added back in but we’ll have to see. I know there were a comments about the elections.  At this point we had created the Election Review Committee.  I’ve already written at length about this process. Where does IT work go?  PASS started to publish our internal management reports starting in December 2010.  You can find them on our Governance page.  These aren’t filtered at all and include a variety of information about IT projects.  The most recent update had roughly a page of updates related to IT.  Lots of the work was related to Summit and the Orator tool that we use to manage speaker submissions. There were numerous requests that Tina Turner not be repeated.  Done.  I don’t think we’ll do anything quite like that again.  We had a request for a payment plan for Summit.  We looked into this briefly but didn’t take any action.  We didn’t think the effort was worth the small number of people that would use it.  If you disagree, submit this on our Summit Feedback site and get some votes. There were lots of suggestions around the first-timers events – especially from first timers.  You can find all our current activities related to first-timers at the First Timers page on the Summit web site.  Plus links to 34 (!) blog posts on suggestions for first-timers.  And a big THANK YOU to Confio and Red Gate for sponsoring this. I hope you get the chance to attend.  These events are very helpful to me as a Board member.  I like being able to look around the room as comments are being made and see the audience reaction.  It helps me gauge the interest in an idea. I’d also like to direct you to the Summit Feedback site.  You can submit and vote on ideas to make the Summit a better experience.  As of right now we have the suggestions from last year still up.  We may reset these prior to the Summit though.

    Read the article

< Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >