Search Results

Search found 4343 results on 174 pages for 'man in the middle'.

Page 155 of 174

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad...

    Monday, April 19
    9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service
    Location: Keynote Hall
    I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions...don't miss it.

    10:45 a.m. - Oracle Fusion Applications: Functional Overview
    Location: South Seas F
    I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20 at 8:00 a.m. in room 3662), should give attendees the update they need about Oracle's next-generation applications.

    1:15 p.m. - E-Business Suite in the Amazon Cloud
    Location: South Seas H
    I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation.

    2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance
    Location: South Seas H
    I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus, the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations.

    Tuesday
    8:00 a.m. - Information Architecture for Men in Kilts
    Location: SURF A
    Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise. Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how critical information architecture is to an Enterprise 2.0 implementation.

    10:30 a.m. - Oracle Virtualization: From Desktop to Data Center
    Location: REEF F
    Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject. So I'm looking forward to Monica Kumar's presentation so I can get up to speed.

    Wednesday
    8:00 a.m. - The Art of the Steal
    Location: Mandalay Bay Ballroom J
    Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me If You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday.

    1:00 p.m. - Google Wave: Will it replace e-mail as we know it today?
    Location: SURF E
    By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other's point of view. In my experience, the main areas of contention between these functional groups have arisen at two key points: during the build phase, and then when there is a problem post-build. During both of these times it is often easier for someone to pass the buck onto someone else than spend the time to understand the other person's perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live.

    In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed onto a front-end developer. This is the first point at which heated discussions can arise. One problem I've seen several times is that the designer doesn't fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to be able to meet some amazing people, and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship: Mixture. Mixture allows the front-end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It's not an IDE, however; it just sits there in the background and monitors the project files, so every time you save a file from your favorite IDE it will compile things like LESS, compact your JavaScript, and then automatically refresh your test browser so you can see the changes instantly. I think one of the best parts of this, however, is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problem he is facing at that time, without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just to allow the designer to see how the build is progressing and suggest small alterations.

    Once the design has been built into the front end, the designer's job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server, other functions start to be pulled in and other issues arise, such as the back-end developer understanding the frameworks that they are using--for example, the routes that are in place in an MVC application, or the number of database calls that the ORM layer is actually making.

    There are many tools out there that can help with these problems, such as MiniProfiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening, and to gain a deeper understanding of an application you may be working on, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) which can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins, such as ones for NHibernate and Linq2SQL (the full list of plugins is on NuGet), it can be customized directly to your own setup to truly delve into the code to see what is happening, and it can help to reduce the number of confusing moments about whether it is your code that is going wrong or whether there is something more sinister happening directly on the server.

    All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say "Well, that's not my problem", simply because the two functions involved don't truly understand the other's point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool, but also as a communication aid to improve the entire lifecycle of a web project. Glimpse is actually the project that I am the designer on, and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • View Docs and PDFs Directly in Google Chrome

    - by Matthew Guay
    Would you like to view documents, presentations, and PDFs directly in Google Chrome? Here's a handy extension that makes Google Docs your default online viewer so you don't have to download the file first.

    Getting Started
    By default, when you come across a PDF or other common document file online in Google Chrome, you'll have to download the file and open it in a separate application. It'd be much easier to simply view online documents directly in Chrome. To do this, head over to the Docs PDF/PowerPoint Viewer page on the Chrome Extensions site (link below), and click Install to add it to your browser. Click Install to confirm that you want to install this extension. Extensions don't run by default in Incognito mode, so if you'd like to always view documents directly in Chrome, open the Extensions page and check Allow this extension to run in incognito. Now, when you click a link for a document online, such as a .docx file from Word, it will open in the Google Docs viewer. These documents usually render at their original full quality. You can zoom in and out to see exactly what you want, or search within the document. Or, if it doesn't look correct, you can click the Download link in the top left to save the original document to your computer and open it in Office. Even complex PDFs render very nicely. Do note that Docs will keep downloading the document as you're reading it, so if you jump to the middle of a document it may look blurry at first but will quickly clear up. You can even view famous presentations online without opening them in PowerPoint. Note that this will only display the slides themselves, but if you're looking for information you likely don't need the slideshow effects anyway.

    Adobe Reader Conflicts
    If you already have Adobe Acrobat or Adobe Reader installed on your computer, PDF files may open with the Adobe plugin. If you'd prefer to read your PDFs with the Docs PDF Viewer, then you need to disable the Adobe plugin. Enter the following in your Address Bar to open your Chrome Plugins page: chrome://plugins/ and then click Disable underneath the Adobe Acrobat plugin. Now your PDFs will always open with the Docs viewer instead.

    Performance
    Who hasn't been frustrated by clicking a link to a PDF file, only to have your browser pause for several minutes while Adobe Reader struggles to download and display the file? Google Chrome's default behavior of simply downloading the files and letting you open them is hardly more helpful. This extension takes away both of these problems, since it renders the documents on Google's servers. Most documents opened fairly quickly in our tests, and we were able to read large PDFs only seconds after clicking their link. Also, the Google Docs viewer rendered the documents much better than the HTML version in Google's cache. Google Docs did seem to have problems with some files, and we saw error messages on several documents we tried to open. If you encounter this, click the Download link in the top left corner to download the file and view it from your desktop instead.

    Conclusion
    Google Docs has improved over the years, and now it offers fairly good rendering even on more complex documents. This extension can make your browsing easier, and help documents and PDFs feel more like part of the Internet. And, since the documents are rendered on Google's servers, it's often faster to preview large files than to download them to your computer.
    Link: Download the Docs PDF/PowerPoint Viewer extension from Google

    Read the article

  • Why is my external USB hard drive sometimes completely inaccessible?

    - by Eliah Kagan
    I have an external USB hard drive, consisting of a 1 TB SATA drive in a Rosewill RX35-AT-SU SLV Aluminum 3.5" Silver USB 2.0 External Enclosure, plugged into my SONY VAIO VGN-NS310F laptop. It is plugged directly into the computer (not through a hub). The drive inside the enclosure is a 7200 rpm Western Digital, but I don't remember the exact model. I can remove the drive from the enclosure (again) if people think it's necessary to know that detail. The drive is formatted ext4. I mount it dynamically with udisks on my Lubuntu 11.10 system, usually automatically via PCManFM. (I have had Lubuntu 12.04 on this machine, and experienced all this same behavior with that too.)

    Every once in a while--once or twice a day--it becomes inaccessible and difficult to unmount. Attempting to unmount it with sudo umount ... gives an error message saying the drive is in use and suggesting fuser and lsof to find out what is using it. Killing processes found to be using the drive with fuser and lsof is sometimes sufficient to let me unmount it, but usually isn't. Once the drive is unmounted or the machine is rebooted, the drive will not mount. Plugging in the drive and turning it on registers nothing on the computer. dmesg is unchanged. The drive's access light usually blinks vigorously, as though the drive is being accessed constantly. Then eventually, after I keep the drive off for a while (half an hour), I am able to mount it again. While the drive isn't working on this machine, it will work immediately on another machine running the same version of Ubuntu. Sometimes bringing it back over from the other machine seems to "fix" it. Sometimes it doesn't.

    The drive doesn't always stop being accessible while mounted before becoming unmountable. Sometimes it works fine, I turn off the computer, I turn the computer back on, and I cannot mount the drive. Currently this is the only drive with which I have this problem, but I've had problems that I think are the same as this, with different drives, on different Ubuntu machines. This laptop has another external USB drive plugged into it regularly, which doesn't have this problem. Unplugging that drive before plugging in the "problem" drive doesn't fix the problem. I've opened the drive up and made sure the connections were tight in the past, and that didn't seem to help (any more than waiting the same amount of time that it took to open and close the drive, before attempting to remount it). Does anyone have any ideas about what could be causing this, what troubleshooting steps I should perform, and/or how I could fix this problem altogether?

    Update: I tried replacing the USB data cable (from the enclosure to the laptop), as Merlin suggested. I should've tried that long ago, since it fits the symptoms perfectly (the drive works on another machine, which would make sense because the cable would be bent at a different angle, possibly completing a circuit of frayed wires). Unfortunately, though, this did not help--I have the same problem with the new cable. I'll try to provide additional detailed information about the drive inside the enclosure next time I'm able to get the drive working. (At the moment I don't have another machine available to attach it.)

    Major Update (28 June 2012)
    The drive seems to have deteriorated considerably. I think this is so because I've attached it to another machine and gotten lots of errors about invalid characters when copying files from it. I am less interested in recovering data from the drive than I am in figuring out what is wrong with it. I specifically want to figure out if the problem is the drive or the enclosure. Now, when I plug the drive into the original machine where I was having the problems, it still doesn't appear (including with sudo fdisk -l), but it is recognized by the kernel and messages are added to dmesg. Most of the messages consist of errors like this, repeated many times:

        [ 7.707593] sd 5:0:0:0: [sdc] Unhandled sense code
        [ 7.707599] sd 5:0:0:0: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
        [ 7.707606] sd 5:0:0:0: [sdc] Sense Key : Medium Error [current]
        [ 7.707614] sd 5:0:0:0: [sdc] Add. Sense: Unrecovered read error
        [ 7.707621] sd 5:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        [ 7.707636] end_request: critical target error, dev sdc, sector 0
        [ 7.707641] Buffer I/O error on device sdc, logical block 0

    Here are all the lines from dmesg starting with when the drive is recognized. Please note that: I'm back to running Lubuntu 12.04 on this machine (and perhaps that's a factor in better error messages). Now that the drive has been plugged into another machine and back into this one, and also now that this machine is back to running 12.04, the drive's access light doesn't blink as I had described. Looking at the drive, it would appear as though it is working normally, with low or no access. This behavior (the errors) occurs when rebooting the machine with the drive plugged in, and also when manually plugging in the drive. A few of the messages are about /dev/sdb. That drive is working fine. The bad drive is /dev/sdc. I just didn't want to edit anything out from the middle.
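
    For reference, the troubleshooting commands mentioned above fit together roughly like this. This is only a sketch: the mount point /media/external and the device /dev/sdc1 are illustrative placeholders, not taken from the question.

        # See which processes hold the mounted filesystem open (mount point assumed)
        fuser -vm /media/external
        lsof /media/external

        # Try to unmount; if it is busy, deal with the offending processes first
        sudo umount /media/external

        # Check whether the kernel sees the device at all
        sudo fdisk -l
        dmesg | tail -n 30

        # Re-mount via udisks (the version shipped with Lubuntu 11.10)
        udisks --mount /dev/sdc1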

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3
    Download the presentation here.
    See also: Blog about Alert Monitoring and Problem Notification; Blog about Using Operational Profiles to Install Packages and other content.

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or at a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - Drill down into a process's details

    Where to start
    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real time information for all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter time frames which are more detailed, such as one hour or one day.

    Applying new monitoring values
    When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past
    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood
    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ (a short example of poking at these files appears at the end of this post):
    - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
    - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone along with an "os.zip" in it.
    - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

    The actual script collecting the data can be viewed for debugging purposes as well. On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect
    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:
    # touch /tmp/.collect.stderr
    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values.

    I hope you find this helpful! Please post questions in the comments below.
    Eran Steiner
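
    As a quick illustration of the layout described under the hood above, on a Linux agent you could inspect the collected data along these lines. The directory and file names come from the post; the unzip/date usage and the example Epoch file name are my own assumptions.

        # List the per-agent historical snapshots
        ls /var/opt/sun/xvm/analytics/historical/

        # Peek inside the main OS archive without extracting it
        unzip -l /var/opt/sun/xvm/analytics/historical/os.zip

        # The small text files are named after Epoch time stamps;
        # GNU date can convert one back to a readable date (example value)
        date -d @1340870400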

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined
    By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager

    I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing and what does that have to do with ERP? Well, if you grew up in the USA during the 50's, 60's and maybe a bit in the early 70's, there was a unifying media of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the "Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!" These ads set the standard for exaggeration and half-truth: "…they love attention…so eager to please, they can even be trained…" The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print, "Caricatures shown not intended to depict Artemia." OK, what ten-year-old knows what the heck artemia is? Well, you grow up fast once you've been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don't take commands or do tricks.

    Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don't like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff. So here is the latest one – "Oracle's JD Edwards is NOT tier one". Really? Definition please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world's largest business software and hardware corporation? Check and check; JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle's JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world's largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don't mind, I am going to mark national and international recognition in the got-it column.

    So what else is there? Well, let me offer a few criteria.
    - Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here.
    - Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example.
    - Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations….
    - Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined independent tools layer that helps enable you to scale your business without an ERP reimplementation.
    - A continuation plan – Oracle's JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle's next generation of applications.

    Oh, I almost forgot that the expert sources agree on one additional thing: tier one may be a preferred vendor that offers products and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is, the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember: the picture on the box might not be the actual size.

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
    So this week I decided I'd do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio's Box Selection capability. This is a handy feature that many people still don't realize was made available in Visual Studio 2010 (and beyond). True, there have been other editors in the past with this capability, but now that it's fully part of Visual Studio we can enjoy its goodness from within our own IDE. So, for those of you who don't know what box selection is and what it allows you to do, read on!

    Sometimes, we want to select beyond the horizontal…
    The problem with traditional text selection in many editors is that it is horizontally oriented. Sure, you can select multiple rows, but if you do you will pull in the entire row (at least for the middle rows). Under the old selection scheme, if you wanted to select a portion of text from each row (a "box" of text) you were out of luck. Box selection rectifies this by allowing you to select a box of text bounded by a selection rectangle that you can grow horizontally or vertically. So let's think of a situation where this comes in handy.

    Let's say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this:

        public enum OrderType
        {
            Buy,      // buy shares of a commodity
            Sell,     // sell shares of a commodity
            Exchange, // exchange one commodity for another
            Cancel,   // cancel an order for a commodity
        }

    Now, let's say we are in the process of creating a Dictionary<K,V> to translate our OrderType:

        var translator = new Dictionary<OrderType, string>
        {
            // do I really want to retype all this???
        };

    Yes, the example above is contrived so that we will pull some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection and then paste them into the translator code, which would result in this:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,      // buy shares of a commodity
            Sell,     // sell shares of a commodity
            Exchange, // exchange one commodity for another
            Cancel,   // cancel an order for a commodity
        };

    But I have a lot of junk there. Sure, I can manually clear it out, or use some search-and-replace magic, but if this were hundreds of lines instead of just a few, that would quickly become cumbersome.

    The Box Selection
    Now that we have the ability to create box selections, we can select the box of text to delete! Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that can span multiple rows. Box selection, however, actually allows us to select a box instead of the typical horizontal lines. Then we can press the [Delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,
            Sell,
            Exchange,
            Cancel,
        };

    Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact that box selection can even work for a zero-width selection! That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down, and you will define a selection range with no width--essentially, a vertical line selection. Notice the faint selection line on the right? So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, the opening braces appear on every line. Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side and type a curly and a comma.

    So close! Now finally, imagine we've already defined these strings somewhere and want to paste them in:

        private const string BuyText = "Buy Shares";
        private const string SellText = "Sell Shares";
        private const string ExchangeText = "Exchange";
        private const string CancelText = "Cancel";

    We can, again, use our box selection to pull out the constant names, click copy (or [Ctrl+C]), select a range to paste into, and finally click paste (or [Ctrl+V]) to get the final result:

        var translator = new Dictionary<OrderType, string>
        {
            { Buy,      BuyText },
            { Sell,     SellText },
            { Exchange, ExchangeText },
            { Cancel,   CancelText },
        };

    Sure, this was a contrived example, but I'm sure you'll agree that it adds myriad possibilities of new ways to copy and paste vertical selections, as well as inserting text across a vertical slice.

    Summary:
    While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines! Try it; the more you use it, the more you'll love it!

    Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection

    Read the article

  • Welcome to Jackstown

    - by fatherjack
    I live in a small town. The population count isn't that great, but let me introduce you to some of the population.

    We'll start with Martin the Doc; he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything, and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains") the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure, he tells you, and then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs.

    Steve sees Brains regularly; that's because he is the librarian. He keeps all sorts of reading material, and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed, and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date, so that anyone reading it won't get misled.

    Over the road from him is Greg; he's the town crier. We don't have a newspaper here, so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested, he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department". Nowadays it's all about the recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones.

    There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history, there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit.

    Someone who works closely with Bart is Brad; he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make them understandable to someone else. Keeping things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules.

    Two people who make for an interesting evening's conversation, if you get them together, are Aaron and Grant. Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though: he'll accept anything you can justify with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice, and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created, then Grant is your man. He mainly works on his own, but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations.

    There aren't too many people left for you to meet in the town. There's Rob; he's an ex-professional sportsman. He played hockey, football, cricket, you name it. He was in his element as goalkeeper / wicketkeeper, and that shows in his personal life. He just goes about his business, and people often don't even know that he's helped them. Really low profile; he doesn't get any glory, but saves people from lots of problems, even disasters on occasion.

    There goes Neil; he's a bit of an odd person. Some people say he's gifted with special clairvoyant powers; personally, I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off to follow his advice, though. He's usually spot on, and you won't be caught by surprise if you follow his recommendations - wherever they come from.

    Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does, it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly.

    And then there's me. What do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • Justifiable Perks.

    - by Phil Factor
    I was once the director of a start-up IT company, and had the task of recruiting a proportion of the management team. As my background was in IT management, I was rather more familiar with recruiting geeks for technology jobs, but here, one of my early tasks was interviewing a Marketing Director. The small group of financiers had suggested a rather strange Irishman called O'Halleran. From my background in City of London dealing-rooms, I was slightly unprepared for the experience of interviewing anyone wearing a pink suit. Many of my older City colleagues would have required resuscitation after seeing his white leather shoes. However, nobody will accuse me of prejudging an interviewee. After all, many Linux experts who I've come to rely on have appeared for interview dressed as hobbits. In fact, the interview went well, and we had even settled his salary. I was somewhat unprepared for the coda.

    ‘And I will need to be provided with a Ferrari by the company.’
    ‘Hmm. That seems reasonable.’
    Initially, he looked startled, and then a slow smile of victory spread across his face.
    ‘What colour would you like?’ I asked genially.
    ‘It has to be red.’ He looked very earnest on this point.
    ‘Fine. I have to go past Hamleys on the way home this evening, so I’ll pick one up then for you.’
    ‘Er.. Hamleys is a toyshop, not a Ferrari dealership.’
    I stared at him in bafflement for a few seconds. ‘You’re not seriously asking for a real Ferrari, are you?’
    ‘Well, yes. Not for my own sake, you understand. I’d much prefer a simple run-about, but my position demands it. How could I maintain the necessary status in the office without one? How could I do my job in marketing when my grey Datsun was all too visible in the car park? It is a tool of the job.’
    ‘Excuse me a moment, but I must confer with the MD.’

    I popped out to see Chris, the MD. ‘Chris, I’m interviewing a lunatic in a pink suit who is trying to demand that a Ferrari is a precondition of his employment. I tried the “misunderstanding trick” but it didn’t faze him.’
    ‘Sorry, Phil, but we’ve got to hire him. The VCs insist on it. You’ve got to think of something that doesn’t involve committing to the purchase of a Ferrari. Current funding barely covers the rent for the building.’
    ‘OK boss. Leave it to me.’

    On my return, I slapped O'Halleran's file on the table with a genial, paternalistic smile. ‘Of course you should have a Ferrari. The only trouble is that it will require a justification document that can be presented to the board. I’m sure you’ll have no problem in preparing this document in the required format.’ The initial look of despair was quickly followed by a bland look of acquiescence. He had, earlier in the interview, argued with great eloquence his skill in preparing the tiresome documents that underpin the essential corporate and government deals that were vital to the success of this new enterprise. The justification of a Ferrari should be a doddle.

    After the interview, Chris nervously asked how I'd fared.
    ‘I think it is all solved.’
    ‘… without promising a Ferrari, I hope.’
    ‘Well, I did actually; on condition he justified it in writing.’
    Chris issued a stream of invective. The strain of juggling the resources in an underfunded startup was beginning to show.
    ‘Don’t worry. In the unlikely event of him coming back with the required document, I’ll give him mine.’
    ‘Yours?’ He strode over to the window to stare down at the car park.

    He needn't have worried: I knew that his breed of marketing man could more easily lay an ostrich egg than prepare a decent justification document. My Ferrari is still there at the back of my garage. Few know of the Ferrari cultivator, a simple inexpensive motorized device designed for the subsistence farmers of southern Italy. It is the very devil to start, but it creates a perfect tilth for the seedbed.

    Read the article

  • Day 4 - Game Sprites In Action

    - by dapostolov
    Yesterday I drew an image on the screen. Most exciting, but ... I spent more time blogging about it than actually coding. So for this next little while I'm going to streamline my game and research, and simply post key notes.

    Quick notes on the last session:
    The most important things I wanted to point out were the following methods:

        spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
        spriteBatch.Draw(sprite, position, Color.White);
        spriteBatch.End();

    - The spriteBatch object is used to draw Textures, and a 2D texture is called a Sprite
    - A texture is generally an image, which is called an Asset in XNA
    - The Draw method in Game1.cs is looped (until exit) and utilises the spriteBatch object to draw a Scene
    - To begin drawing a Scene you call the Begin method. To end a Scene you call the End method. And to place an image on the Scene you call the Draw method.

    The most simple implementation of the draw method is:

        spriteBatch.Draw(sprite, position, Color.White);

    1) sprite - the 2D texture you loaded to draw
    2) position - the 2D vector, a set of x & y coordinates
    3) Color.White - the tint to apply to the texture; in this case, white light = nothing, nada, no tint

    Game Sprites In Action!
    Today, I played around with Draw methods to get comfortable with their "quirks". The following is an example of the above draw method, but with more parameters available for us to use. Let's investigate!

        spriteBatch.Draw(sprite, position2, null, Color.White, MathHelper.ToRadians(45.0f), new Vector2(sprite.Width / 2, sprite.Height / 2), 1.0F, SpriteEffects.None, 0.0F);

    The parameters (in order):
    1) sprite - the texture to display
    2) position2 - the position on the screen / scene; this can also be a rectangle
    3) null - the portion of the image to display within an image; null = display the full image; this is generally used for animation strips / grids (more on this below)
    4) Color.White - texture tinting; white = no tint
    5) MathHelper.ToRadians(45.0f) - rotation of the object, in this case 45 degrees; rotates from the set plotting point
    6) new Vector2(sprite.Width / 2, sprite.Height / 2) - the plotting point; at (0,0) the image rotates from the top left of the texture; in the code above, the point is set to the middle of the image
    7) 1.0f - image scaling (1x)
    8) SpriteEffects.None - you can flip the image horizontally or vertically
    9) 0.0f - the z-index of the image. 0 = closer, 1 behind?

    And playing around with different combinations I was able to come up with the following whacky display:

    Checking off Yesterday's Intention List:
    - learn game development terminology (in progress) - We learned sprite, scene, texture, and asset.
    - how to place and position (rotate) a static image on the screen (completed) - The thing to note was that rotation is in radians, and I found a cool helper method to convert degrees into radians. Also, the image rotates from its specified point.
    - how to layer static images on the screen (completed) - I couldn't seem to get the zIndex working, but one thing's for sure: the order you draw the images in also determines how they are rendered on the screen.
    - understand image scaling (in progress) - I'm not sure I have this fully covered, but for the most part plug a number into the scaling field and the image grows / shrinks accordingly.
    - can we reuse images? (completed) - Yes, I loaded one image and plotted the bugger all over the screen.
    - understand how framerate is handled in XNA (in progress) - I hacked together some code to display the framerate each second (a sketch of one way to do this appears at the end of this post). A framerate of 60 appears to be the standard. Interesting to note, the GameTime object does provide you with some cool timing capabilities, such as...is the game running slow? Need to investigate this down the road.
    - how to display text, basic shapes, and colors on the screen (in progress) - I got text rendered on the screen, and I understand containing rectangles. However, I didn't display "shapes" & "colors".
    - how to interact with an image (collision of user input?) (todo)
    - how to animate an image and understand basic animation techniques (in progress) - I was able to create a strip animation of numbers ranging from 1 - 4; each block was 40 x 40 pixels for a total strip size of 160 x 40. Using the portion (source rectangle) parameter, I limited the display to each section at varying intervals. It was interesting to note my first implementation animated at rocket speed. I then tried to create a smoother animation by limiting the redraw capacity, which seemed to work. I guess a little more research will have to be put into this for animating characters / scenes.
    - how to detect colliding images or screen edges (todo) - but the rectangle object can detect collisions, I believe.
    - how to manipulate the image, let's say colors, stretching (in progress) - I haven't figured out how to modify a specific color to be another color, but the tinting parameter definitely could be used. As for stretching, use the rectangle object as the positioning and the image will stretch to fit!
    - how to focus on a segment of an image...like only displaying a frame on a film reel (completed) - as per basic animation techniques
    - what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.) (todo)

    Tomorrow's Intention
    Tomorrow I am going to take a stab at rendering a game menu, and from there I'm going to investigate how I can improve upon the code and techniques.
    Intention List:
    - Render a menu, fancy or not
    - Show the mouse cursor
    - Hook up a click event
    - A basic animation of some sort
    - Investigate image / menu techniques

    D.
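
    On the framerate item above, here is a minimal sketch of the kind of per-second counter described. It assumes the standard XNA Game plumbing plus a SpriteFont loaded in LoadContent; the field names (frameCounter, frameRate, font, etc.) are my own, not from the original post.

        // Fields on the Game class (names are illustrative)
        int frameCounter = 0;             // frames drawn since the last measurement
        int frameRate = 0;                // frames counted over the previous second
        TimeSpan elapsed = TimeSpan.Zero;

        protected override void Update(GameTime gameTime)
        {
            elapsed += gameTime.ElapsedGameTime;
            if (elapsed > TimeSpan.FromSeconds(1))
            {
                elapsed -= TimeSpan.FromSeconds(1);
                frameRate = frameCounter; // publish the count once per second
                frameCounter = 0;
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            frameCounter++;
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            // 'font' is an assumed SpriteFont loaded in LoadContent
            spriteBatch.DrawString(font, "FPS: " + frameRate, new Vector2(10, 10), Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }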

    Read the article

  • BizTalk Envelopes explained

    - by Robert Kokuti
    Recently I've been trying to get some order into an ESB-BizTalk pub/sub scenario, and decided to wrap the payload into standardized envelopes. I have used envelopes before in a 'lightweight' fashion, and I found that they can be quite useful and powerful if used systematically. Here is what I learned.

    The Theory
    In my experience, envelopes are often underutilised in a BizTalk solution, and quite often their full potential is not well understood. Here I try to simplify the theory behind envelopes within BizTalk.

    Envelopes can be used to attach additional data to the 'real' data (payload). This additional data can contain all routing and processing information, and allows treating the business data as a 'black box', possibly compressed and/or encrypted, etc. The point here is that the infrastructure does not need to know anything about the business data content, just as a postman does not need to read the letter within the envelope. BizTalk has built-in support for envelopes through the XMLDisassembler and XMLAssembler pipeline components (these are part of the default XMLReceive and XMLSend pipelines). These components, among other things, perform the following:

    XMLDisassembler
    - Extracts the payload from the envelope into the Message Body
    - Copies data from the envelope into the message context, as specified by the property schema(s) associated with the envelope schema

    Typically, once the envelope is through the XMLDisassembler, the payload is submitted into the Messagebox, and the rest of the envelope data is copied into the context of the submitted message. The XMLDisassembler uses the Property Schemas, referenced by the Envelope Schema, to determine the names of the promoted Message Context elements.

    XMLAssembler
    - Wraps the Message Body inside the specified envelope schema
    - Populates the envelope values from the message context, as specified by the property schema(s) associated with the envelope schema

    Notice that there is no requirement to use the receiving envelope schema when sending. The sent message can be wrapped within any suitable envelope, regardless of whether the message was originally received within an envelope or not. However, by sharing Property Schemas between envelopes, it is possible to pass values from the incoming envelope to the outgoing envelope via the Message Context.

    The Practice
    Creating the Envelope
    Add a new Schema to the BizTalk project. Envelopes are defined as schemas, with the <Schema> Envelope property set to Yes, and the root node's Body XPath property pointing to the node which contains the payload. Typically, you'd create an envelope structure with a Header node for the routing data and a Body node for the payload (an illustrative instance appears at the end of this post). Click on the <Schema> node and set the Envelope property to Yes. Then, click on the Envelope node, and set the Body XPath property pointing to the 'Body' node. The 'Body' node is a Child Element, and its Data Structure Type is set to xs:anyType. This allows the Body node to carry any payload data. The XMLReceive pipeline will submit the data found in the Body node, while the XMLSend pipeline will copy the message into the Body node before sending to the destination.

    Promoting Properties
    Once you have defined the envelope, you may want to promote the envelope data (anything other than the Body) as Property Fields, in order to preserve their values in the message context. Anything not promoted will be lost when the XMLDisassembler extracts the payload from the Body. Typically, this means you promote everything in the Header node. Property promotion uses associated Property Schemas. These are special BizTalk schemas which have a flat field structure. Property Schemas define the names of the promoted values in the Message Context by combining the Property Schema's Namespace and the individual Field names. It is worth being systematic when it comes to naming your schemas, their namespaces and type names. A coherent method will make your life easier when it comes to referencing the schemas during development, and managing subscriptions (filters) in BizTalk Administration. I developed a fairly coherent naming convention which I'll probably share in another article. Because the property schema must be flat, I recommend creating one for each level in the envelope header hierarchy.

    Property schemas are very useful in passing data between incoming and outgoing envelopes. As I mentioned earlier, in/out envelopes do not have to be the same, but you can use the same property schema for promoting the outgoing envelope fields as you used for the incoming schema. As you can reference many property schemas for field promotion, you can pick data from a variety of sources when you define your outgoing envelope. For example, the outgoing envelope can carry some of the incoming envelope's promoted values, plus some values from the standard BizTalk message context, like the AdapterReceiveCompleteTime property from the BizTalk message-tracking properties. The values you promote for the outgoing envelope will be automatically populated from the Message Context by the XMLAssembler pipeline component.

    Using the Envelope
    Receiving: Enveloped messages are automatically recognized by the XMLReceive pipeline, or any other custom pipeline which includes the XMLDisassembler component. The Body Path node will become the Message Body, while the rest of the envelope values will be added to the Message Context, as defined by the Property Schemas referenced by the Envelope Schema.

    Sending: The Send Port's filter expression can use the promoted properties from the incoming envelope. If you want to enclose the sent message within an envelope, the Send Port XMLAssembler component must be configured with the fully qualified envelope name. One way of obtaining the fully qualified envelope name is to copy it off the envelope schema property page. The full envelope schema name is constructed as <Name>, <Assembly>. The outgoing envelope is populated by the XMLAssembler pipeline component. The Message Body is copied to the specified envelope's Body Path node, while the rest of the envelope fields are populated from the Message Context, according to the Property Schemas associated with the Envelope Schema.

    That's all for now, happy enveloping!
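
    To make the Header/Body structure concrete, here is a sketch of what an instance message wrapped in such an envelope might look like. All element names, namespaces and values below are illustrative assumptions, not taken from the article.

        <ns0:Envelope xmlns:ns0="http://example.org/schemas/envelope">
          <Header>
            <!-- fields here would be promoted into the Message Context
                 via the associated Property Schema(s) -->
            <MessageType>Order</MessageType>
            <CorrelationId>42</CorrelationId>
          </Header>
          <Body>
            <!-- the root node's Body XPath points here, so the
                 XMLDisassembler submits this content as the Message Body -->
            <Order xmlns="http://example.org/schemas/order">...</Order>
          </Body>
        </ns0:Envelope>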

    Read the article

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single-joint translation and rotation, and multi-jointed translation. The problem arises when I try to rotate a multi-jointed skeleton. The image above shows the current progress. The left image shows how the model should deform. The middle image shows how it deforms in my project. The right image shows a better deform (still not right), achieved by inverting a certain value, which I will explain below. The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I come to parse the animation data, for each frame I check if the bone has a parent; if not, the data is passed in as-is from the MD5anim file. If it does have a parent, I transform the bone's position by the parent's orientation, and then add this to the parent's translation. Then the parent and child orientations get concatenated. This is covered at this website.

if (Parent < 0)
{
    ... // Save this data without editing it
}
else
{
    Math3::vec3 rpos;
    Math3::quat pq = Parent.Quaternion;
    Math3::quat pqi(pq);
    pqi.InvertUnitQuat();
    pqi.Normalise();
    Math3::quat::RotateVector3(rpos, pq, jv);
    Math3::vec3 npos(rpos + Parent.Pos);
    this->Translation = npos;
    Math3::quat nq = pq * jq;
    nq.Normalise();
    this->Quaternion = nq;
}

And to achieve the image to the right, all I need to do is change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv); why is that? And this is my skinning shader, SkinningShader.vert:

#version 330 core
smooth out vec2 vVaryingTexCoords;
smooth out vec3 vVaryingNormals;
smooth out vec4 vWeightColor;
uniform mat4 MV;
uniform mat4 MVP;
uniform mat4 Pallete[55];
uniform mat4 invBindPose[55];
layout(location = 0) in vec3 vPos;
layout(location = 1) in vec2 vTexCoords;
layout(location = 2) in vec3 vNormals;
layout(location = 3) in int vSkeleton[4];
layout(location = 4) in vec3 vWeight;
void main()
{
    vec4 wpos = vec4(vPos, 1.0);
    vec4 norm = vec4(vNormals, 0.0);
    vec4 weight = vec4(vWeight, (1.0f - (vWeight[0] + vWeight[1] + vWeight[2])));
    normalize(weight);
    mat4 BoneTransform;
    for(int i = 0; i < 4; i++)
    {
        if(vSkeleton[i] != -1)
        {
            if(i == 0)
            {
                // These are interchangeable for some reason
                // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
            }
            else
            {
                // These are interchangeable for some reason
                // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
            }
        }
    }
    wpos = BoneTransform * wpos;
    vWeightColor = weight;
    vVaryingTexCoords = vTexCoords;
    vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV));
    gl_Position = wpos * MVP;
}

The Pallete matrices are the matrices calculated using the above code (a rotation and a translation matrix get created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file.

Update 1: I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now I'm checking if there's a problem with matrix creation...

Update 2: Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of the different aspects of the framework. Before we get into the details, what is the Case Management Framework? It is essentially a suite of tools that, when used together, solve many common workflow scenarios. The tool has been developed over time by K2 consultants who realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework. Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model. Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time. CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer. It is not a solution in and of itself; development is still required to tie the pieces together.

CMF is under continuous development, both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable. CMF is not an officially supported K2 product, which means you will not get technical support, but you will get access to the source code.

The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc., and various users can access that case at any time and take one of a set of pre-determined actions on it. When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model; there must be more than just this model to help decide if CMF is the right solution. As the training went on, we learned that one of the key features of CMF is SharePoint integration: each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases. While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers. My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way.

When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head, as I felt that depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites. So we have yet another deciding factor for CMF: just how many cases will your solution be creating? While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework. Without it, you are losing a big chunk of what CMF has to offer.

When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken; definitely one of those common solutions that it is good to see rolled up into a re-usable entity (and it gets a nice configuration UI to boot!). This concept is a little different from traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data. Certainly some workflow threads can get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible with action->status mappings to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case).

I realize that this description of the Case Management Framework merely scratches the surface of what the product actually can do, and I don't think I've conclusively defined for what sort of business scenario you can make a case for the Case Management Framework. What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports. The best way to see if it will truly fit your needs is to give it a try. Did I mention it is FREE? Er, ok, so it is free, but only obtainable at this time for K2 partners.
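To make the mapping-table idea concrete, here is a minimal sketch in C# of the kind of action-to-status lookup CMF rolls up into a configurable entity; all of the names here are invented for illustration and this is not CMF's actual API.

using System;
using System.Collections.Generic;

// Hand-rolled version of the action -> status mapping that CMF provides
// out of the box (with a configuration UI). All names are hypothetical.
public class CaseActionMap
{
    private readonly Dictionary<string, string> m_StatusByAction =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "Submit",   "Under Review" },
            { "Approve",  "Approved" },
            { "Reject",   "Closed" },
            { "Escalate", "With Manager" }
        };

    // Returns the new case status for an action, or throws if the action
    // is not permitted - which is also how a mapping can prevent certain
    // actions on a case in a given state.
    public string Apply(string action)
    {
        string newStatus;
        if (!m_StatusByAction.TryGetValue(action, out newStatus))
        {
            throw new InvalidOperationException(
                "Action '" + action + "' is not allowed here.");
        }
        return newStatus;
    }
}

Every bespoke version of this table is one more thing to write, test, and maintain, which is exactly why having it as a reusable, configurable entity in the framework is attractive.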

    Read the article

  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part IV

My third day was exhibition day for me! I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite. I found a number of interesting vendors and thought I would share what I found here. These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers.

Highly Available EBS Upgrades

A few years ago I worked with a customer that was a port authority. They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers. However, they only had a 2-hour downtime window to perform upgrades. This was not a problem for the core database and middleware technology, which could accommodate those upgrade timescales easily. It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage. This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.

Mobile on WebLogic

I have come across a number of customers who want a comprehensive mobile solution, with connected and disconnected operation and so forth. ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure. The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don’t have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations.

Sharepoint Connector for WebCenter Content

Obviously Sharepoint is an evil, pernicious intrusion into a company's IT estate, but it is widely deployed, many people like it, and many would also like to take advantage of Oracle products such as WebCenter Content. So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter. It looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter.

Load Balancing

The Enterprise Deployment Guides are Oracle's bible on building highly available FMW environments, and each of them requires a front-end load balancer. I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW. I like F5: they provide (relatively) easy-to-use products that do what they say on the side of the box. They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well! Besides which, I like their logo!

Other Stuff

I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
A Quiet Night

Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night. I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial. I got the wrong hotel for the session, snuck in 20 minutes late at the back, and started working on the hands-on workshop. One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled, "Is that Antony?!" It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia. It was good to catch up with him. As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslam, Oracle ACE from the UK. After the tutorial Simon and I retired to the coffee shop to catch up and share stories. Two and a half hours later we decided it was time to retire; so much for an early night, but it was great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see whether we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning.

The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both. Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server? It's all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago.

Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, just a load measurement; from this, I don't think we're looking at major memory issues, but I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server.

Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance.
If I saw spikes in memory or disk that corresponded to the drop in CPU, you can assume something was using those other resources and causing the drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • C# async and actors

    - by Alex.Davies
    If you read my last post about async, you might be wondering what drove me to write such odd code in the first place. The short answer is that .NET Demon is written using NAct Actors. Actors are an old idea, one which I believe deserves a renaissance under C# 5. The idea is to isolate each stateful object so that only one thread has access to its state at any point in time. That much should be familiar; it's equivalent to traditional lock-based synchronization. The different part is that actors pass "messages" to each other rather than calling a method and waiting for it to return. By doing that, each thread can only ever be holding one lock. This completely eliminates deadlocks, my least favourite concurrency problem.

Most people who use actors take this quite literally, and there are plenty of frameworks which help you to create message classes and loops which can receive the messages, inspect what type of message they are, and process them accordingly. But I write C# for a reason. Do I really have to choose between using actors and everything I love about object orientation in C#? Type safety, interfaces, inheritance, generics? As it turns out, no. You don't need to choose between messages and method calls. A method call makes a perfectly good message, as long as you don't wait for it to return. This is where asynchronous methods come in.

I have used NAct for a while to wrap my objects in a proxy layer. As long as I followed the rule that methods must always return void, NAct queued up the call for later, and immediately released my thread. When I needed to get information out of other actors, I could use EventHandlers and callbacks (continuation passing style, for any CS geeks reading), and NAct would call me back in my isolated thread without blocking the actor that raised the event. Using callbacks looks horrible though. To remind you:

m_BuildControl.FilterEnabledForBuilding(
    projects,
    enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
        enabledProjects,
        newDirtyProjects =>
        {
            .......

Which is why I'm really happy that NAct now supports async methods. Now, methods are allowed to return Task rather than just void. I can await those methods, and C# 5 will turn the rest of my method into a continuation for me. NAct will run the other method in the other actor's context, but will make sure that when my method resumes, we're back in my context. Neither actor was ever blocked waiting for the other one. Apart from when they were actually busy doing something, they were responsive to concurrent messages from other sources. To be fair, you could use async methods with lock statements to achieve exactly the same thing, but it's ugly. Here's a realistic example of an object that has a queue of data that gets passed to another object to be processed:

class QueueProcessor
{
    private readonly ItemProcessor m_ItemProcessor = ...
    private readonly object m_Sync = new object();
    private Queue<object> m_DataQueue = ...
    private List<object> m_Results = ...

    public async Task ProcessOne()
    {
        object data = null;
        lock (m_Sync)
        {
            data = m_DataQueue.Dequeue();
        }
        var processedData = await m_ItemProcessor.ProcessData(data);
        lock (m_Sync)
        {
            m_Results.Add(processedData);
        }
    }
}

We needed to write two lock blocks: one to get the data to process, one to store the result. The worrying part is how easily we could have forgotten one of the locks.
Compare that to the version using NAct:

class QueueProcessorActor : IActor
{
    private readonly ItemProcessor m_ItemProcessor = ...
    private Queue<object> m_DataQueue = ...
    private List<object> m_Results = ...

    public async Task ProcessOne()
    {
        // We are an actor, it's always thread-safe to access our private fields
        var data = m_DataQueue.Dequeue();
        var processedData = await m_ItemProcessor.ProcessData(data);
        m_Results.Add(processedData);
    }
}

You don't have to explicitly lock anywhere; NAct ensures that your code will only ever run on one thread, because it's an actor. Either way, async is definitely better than traditional synchronous code. Here's a diagram of what a typical synchronous implementation might do: the left side shows what is running on the thread that has the lock required to access the QueueProcessor's data. The red section is where that lock is held but doesn't need to be. Contrast that with the async version we wrote above: here, the lock is released in the middle. The QueueProcessor is free to do something else. Most importantly, even if the ItemProcessor sometimes calls the QueueProcessor, they can never deadlock waiting for each other. So I thoroughly recommend you use async for all code that has to wait a while for things. And if you find yourself writing lots of lock statements, think about using actors as well. Using actors and async together really takes the misery out of concurrent programming.
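For anyone who wants to poke at the pattern without NAct itself, here is a self-contained sketch of the lock-based variant described above, with an invented ItemProcessor standing in for the article's elided fields; it is illustrative only, not the author's actual code.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Invented stand-in for the article's ItemProcessor.
class ItemProcessor
{
    public async Task<object> ProcessData(object data)
    {
        await Task.Delay(10); // simulate slow work; no lock is held here
        return data.ToString().ToUpperInvariant();
    }
}

class QueueProcessor
{
    private readonly ItemProcessor m_ItemProcessor = new ItemProcessor();
    private readonly object m_Sync = new object();
    private readonly Queue<object> m_DataQueue = new Queue<object>(new object[] { "a", "b" });
    private readonly List<object> m_Results = new List<object>();

    public async Task ProcessOne()
    {
        object data;
        lock (m_Sync) { data = m_DataQueue.Dequeue(); }           // lock #1
        var processed = await m_ItemProcessor.ProcessData(data);  // await with no lock held
        lock (m_Sync) { m_Results.Add(processed); }               // lock #2
    }
}

The property the diagrams describe is visible in ProcessOne: the lock is never held across the await, so the object stays responsive while ProcessData runs, at the cost of having to remember both lock blocks yourself.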

    Read the article

  • While installing updates I get "Package operation Failed" for ubuntu 12.04

    - by user54395
    I get the following response in the details. Please help.

installArchives() failed:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100%
(Reading database ... 427340 files and directories currently installed.)
Preparing to replace thunderbird-trunk-globalmenu 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk-globalmenu_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ...
Unpacking replacement thunderbird-trunk-globalmenu ...
Preparing to replace thunderbird-trunk 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ...
Unpacking replacement thunderbird-trunk ...
Processing triggers for man-db ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for gnome-menus ...
Processing triggers for desktop-file-utils ...
Setting up crossplatformui (1.0.27) ...
Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart
Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available.
acpid stop/waiting
acpid start/running, process 5286
package libqtgui4 exist
QT_VERSION = 4
make -C /lib/modules/3.2.0-22-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-22-generic'
  CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
compilation terminated.
make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
make: *** [modules] Error 2
dpkg: error processing crossplatformui (--configure):
 subprocess installed post-installation script returned error exit status 2
No apport report written because MaxReports is reached already
Setting up thunderbird-trunk (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ...
Setting up thunderbird-trunk-globalmenu (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ...
Errors were encountered while processing:
 crossplatformui
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
Setting up crossplatformui (1.0.27) ...
Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart
Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available.
acpid stop/waiting
acpid start/running, process 5541
package libqtgui4 exist
QT_VERSION = 4
make -C /lib/modules/3.2.0-22-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-22-generic'
  CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
compilation terminated.
make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
make: *** [modules] Error 2
dpkg: error processing crossplatformui (--configure):
 subprocess installed post-installation script returned error exit status 2

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

I’m currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a “dominating force” for Pluralsight, but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors – shows all 146 authors, and you can click on any author’s page to see how many (and which) courses they have authored. The problem is: I don’t want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript using the Chrome JavaScript console to do some “detective work.”

My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author and select “Inspect Element”. I can see there is a primary <div> with a class of “main” which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and a link to their page. This web page already has jQuery loaded, so I can use $ directly from the console. This allows me to use jQuery to inspect items on the current page. Notice this is a multi-line command; in order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line.

Now I can see I’m extracting data just fine. At this point I want to follow each URL. Then I want to screen-scrape this next page to see how many courses each author has done. Let’s take a look at the author detail page: I can see we have a table (with a CSS class of “course”) that contains rows for each course authored. This means I can get the number of courses pretty easily.

Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page, and the “data” variable in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method.

Now I’m getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that’s not quite what I’m after – what I want to see are the authors that have the MOST courses in the library. What I’d like to do is to put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can’t perform the sort operation until ALL of the author detail pages have been processed. The jQuery $.get() method is naturally an async method, so I essentially have 146 async calls and I don’t want to perform my sort action until ALL have completed (side note: don’t run this script too many times or the Pluralsight servers might think you’re an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here, but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use the $.when.apply() method to execute my descending sort operation once all requests are complete.
Here is my complete console command:

var authorList = [],
    defList = [];
$(".main h3 a").each(function() {
    var def = $.Deferred();
    defList.push(def);
    var authorName = $(this).text();
    var authorUrl = $(this).attr('href');
    $.get(authorUrl, function(data) {
        var courseCount = $(data).find("table.course tbody tr").length;
        authorList.push({ name: authorName, numberOfCourses: courseCount });
        def.resolve();
    });
});
$.when.apply($, defList).then(function() {
    console.log("*Everything* is complete");
    var sortedList = authorList.sort(function(obj1, obj2) {
        return obj2.numberOfCourses - obj1.numberOfCourses;
    });
    for (var i = 0; i < sortedList.length; i++) {
        console.log(authorList[i]);
    }
});

And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn’t the only “dominating force”. I would have assumed Scott Allen was #1, but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I’m in the middle of producing only my third course, I better get to work!

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7±2) involved in the sizing of the work during the “Sprint Planning Meeting”. The standard flippant answer of all Scrum professionals, “Well, that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well, we are not doing Scrum then”, which in turn leads to “We are doing Scrum… but we have split the Scrum team into units of 2/3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so… it is just a form of Agile. Don’t get me wrong at this stage: there is nothing wrong with Agile, just don’t call it Scrum.

The reason the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first Sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high-performing teams. Remember, it is the product owner's money!

We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies.

The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint. - Scrum Guide, Scrum.org

The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain.

A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team. - Scrum Guide, Scrum.org

This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high-performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum.

So, what should we NOT have done on our first Scrum project?

Should not have had 3 interns as the only on-site resource – This led to bad practices, as the experienced guys were not there helping and correcting as they usually would.

Should not have had the only experienced guys offsite – With both of the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible.

Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week it makes it difficult for the team to raise impediments as they go.
Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The “single wringable neck” needs to contain both the Money and the Vision, as well as attending the required meetings.

I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt…

Technorati Tags: Scrum, Sprint Planning, Sprint Retrospective, Scrum.org, Scrum Guide, Scrum Ceremonies, ScrumMaster, Product Owner

Need Help? Professional Scrum Developer Training: SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • How to Play PC Games on Your TV

    - by Chris Hoffman
    No need to wait for Valve’s Steam Machines — connect your Windows gaming PC to your TV and use powerful PC graphics in the living room today. It’s easy — you don’t need any unusual hardware or special software. This is ideal if you’re already a PC gamer who wants to play your games on a larger screen. It’s also convenient if you want to play multiplayer PC games with controllers in your living room.

HDMI Cables and Controllers

You’ll need an HDMI cable to connect your PC to your television. This requires a TV with HDMI-in, a PC with HDMI-out, and an HDMI cable. Modern TVs and PCs have had HDMI built in for years, so you should already be good to go. If you don’t have a spare HDMI cable lying around, you may have to buy one or repurpose one of your existing HDMI cables. Just don’t buy the expensive HDMI cables — even a cheap HDMI cable will work just as well as a more expensive one. Plug one end of the HDMI cable into the HDMI-out port on your PC and one end into the HDMI-in port on your TV. Switch your TV’s input to the appropriate HDMI port and you’ll see your PC’s desktop appear on your TV. Your TV becomes just another external monitor. If you have your TV and PC far away from each other in different rooms, this won’t work. If you have a reasonably powerful laptop, you can just plug that into your TV — or you can unplug your desktop PC and hook it up next to your TV.

Now you’ll just need an input device. You probably don’t want to sit directly in front of your TV with a wired keyboard and mouse! A wireless keyboard and wireless mouse can be convenient and may be ideal for some games. However, you’ll probably want a game controller like console players use. Better yet, get multiple game controllers so you can play local-multiplayer PC games with other people. The Xbox 360 controller is the ideal controller for PC gaming. Windows supports these controllers natively, and many PC games are designed specifically for these controllers. Note that Xbox One controllers aren’t yet supported on Windows because Microsoft hasn’t released drivers for them. Yes, you could use a third-party controller or go through the process of pairing a PlayStation controller with your PC using unofficial tools, but it’s better to get an Xbox 360 controller. Just plug one or more Xbox controllers into your PC’s USB ports and they’ll work without any setup required. While many PC games support controllers, bear in mind that some games require a keyboard and mouse.

A TV-Optimized Interface

Use Steam’s Big Picture interface to more easily browse and launch games. This interface was designed for use on a television with controllers and even has an integrated web browser you can use with your controller. It will be used on Valve’s Steam Machine consoles as the default TV interface. You can use a mouse with it too, of course. There’s also nothing stopping you from just using your Windows desktop with a mouse and keyboard — aside from how inconvenient it will be. To launch Big Picture Mode, open Steam and click the Big Picture button at the top-right corner of your screen. You can also press the glowing Xbox logo button in the middle of an Xbox 360 controller to launch the Big Picture interface if Steam is open.

Another Option: In-Home Streaming

If you want to leave your PC in one room of your home and play PC games on a TV in a different room, you can consider using local streaming to stream games over your home network from your gaming PC to your television.
Bear in mind that the game won’t be as smooth and responsive as it would if you were sitting in front of your PC. You’ll also need a modern router with fast wireless network speeds to keep up with the game streaming. Steam’s built-in In-Home Streaming feature is now available to everyone. You could plug a laptop with less-powerful graphics hardware into your TV and use it to stream games from your powerful desktop gaming rig. You could also use an older desktop PC you have lying around. To stream a game, log into Steam on your gaming PC and log into Steam with the same account on another computer on your home network. You’ll be able to view the library of installed games on your other PC and start streaming them. NVIDIA also has their own GameStream solution that allows you to stream games from a PC with powerful NVIDIA graphics hardware. However, you’ll need an NVIDIA Shield handheld gaming console to do this. At the moment, NVIDIA’s game streaming solution can only stream to the NVIDIA Shield. However, the NVIDIA Shield device can be connected to your TV so you can play that streaming game on your TV. Valve’s Steam Machines are supposed to bring PC gaming to the living room and they’ll do it using HDMI cables, a custom Steam controller, the Big Picture interface, and in-home streaming for compatibility with Windows games. You can do all of this yourself today — you’ll just need an Xbox 360 controller instead of the not-yet-released Steam controller. Image Credit: Marco Arment on Flickr, William Hook on Flickr, Lewis Dowling on Flickr

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
    Really not sure this is the right place to ask such a question, but it is about programming in a different light. I am currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. To give you a sample: the company VPN is configured so that if you have VPN then HTTP traffic is banned. Bearing this in mind, can you imagine my workflow?

Morning. Ok, time to get the latest source. Ups, no VPN. Let’s connect. Click-click. 3 sec. wait time. Ok, getting source. Do I have emails? Ups. VPN is on, can’t check my emails. Need to wait for the source to come up. Finally here it is! Ok, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let’s track it down. It is in TFS already. Oh, damn, I need VPN. Click-click. Ok, there is a description. Yeah, I have seen this issue on stackoverflow.com. Let’s go there. Ups, no internet. Click-click. No internet. What? IPconfig… The DHCP server kicked me out. Damn. Renew IP. 1..2..3. Ok, internet is back. Google: site:stackoverflow.com. 3 min. I have a solution. Great, I love stackoverflow.com. I don’t want to remember the days when there was no stackoverflow.com. Ok. Copy-paste this link to the studio. Damn, the studio is stalled, can’t reach files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let’s see what the other comments about the issue on stackoverflow.com say. Hmm.. There is a link. Click. Dammit! No internet. Click-click. No internet. The DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened/closed, my internet goes down solid. The only way to get the internet back is a reboot. All my browser tabs/SQL windows/studio will be gone. This happened just now while I was typing this.

Back to the issue I am solving right now: I am getting frustrated. I do not care about a better solution for this issue any more. Let’s do it somehow and forget. This click-click barrier between the internet and TFS kills me… Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I am not allowed to make changes. I am very, very lucky to have admin privileges on my machine. Most of the developers don’t. So I just learned to live with this frustration. It takes away 40-60 minutes daily. I tried to email company support and the admins. They are too important and too busy with something, so they just politely ignored the little man's problem.

Question is: Is this normal in the corporate world? (I have been in the States, Canada, Germany. Never seen this.)

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0, a different version to what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases.

Solaris 10 was the first release where we included the OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10, the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10. Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in the older taxonomy):

Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release.

Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans). Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10, there are allowed exceptions; again quoting from attributes(5):

4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface.
5. Not making the incompatible change would be incomprehensible to our customers.

In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply. Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007.

Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, which means OpenSSL is now in /usr/lib; secondly, we upgraded it to the then-current version stream of OpenSSL (0.9.8), as was expected by our customers.

In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011, the OpenSSL community released version 1.0.0. This was a huge milestone for a long-standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release; it is the latest, best supported and best performing version. In fact, Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11.

When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that at some point in a future release we will be able to give it a higher interface stability level. Happy crypting! And thank you, OpenSSL community, for all the work you have done that helps Solaris.
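If you want to check which OpenSSL stream a given machine is actually running, the openssl version subcommand prints it. The Solaris 10 path below assumes the binary sits under /usr/sfw alongside the libraries discussed above (an assumption; your install may differ):

$ /usr/sfw/bin/openssl version    # Solaris 10: Volatile interfaces under /usr/sfw
$ openssl version                 # Solaris 11: OpenSSL on the default path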

    Read the article

  • Top ten things that don't make sense in The Walking Dead

    - by iamjames
    For those of you that don't know, The Walking Dead is a popular American TV show on AMC about a group of people trying to survive in a zombie-filled world. Here's the top ten eleven things that don't make sense on the show (and have never been explained):

1)  They never visit stores. No Walmarts, Kmarts, Targets, shopping malls, pawn shops, gas stations, etc. You'd think that would be the first place you'd visit for supplies, but they never have. Not once. There was a tiny corner store they visited in a small town, and while many products were already gone they did find several useful items.

2)  They never raid houses. Why not? One would imagine that they would want to search houses for useful items, but they don't.

3)  They don't use 2-way radios. Modern 2-way radios have a 36-mile range. That's probably the best possible range, but even if the real range is only 10% of that, 3.6 miles, that's still more than enough for most situations: for the occasional "hey, zombies attacking, can you give me a hand?" or "there's zombies walking by, stay inside until they leave" or "remember to pick up milk at the store, love mom". And yes, they would need batteries or recharging, but they have been using gas-powered generators on the show and I'm sure a car charger would work.

4)  They use gas-guzzling vehicles. Every vehicle they have is from the 80s or 90s, except for the new Kia SUV that's there for product placement. Why? They should all be driving new small SUVs or hybrids. Visit a dealership and steal more fuel-efficient vehicles, because while the Walmarts might be empty from people raiding them for supplies, I'm sure most people weren't thinking "Gee, I should go car shopping" when the infection hit.

5)  They drive a motorcycle. Seriously? Let's find the least protective vehicle and drive that. And while motorcycles get reasonable gas mileage, 5 people in an SUV get better gas mileage per person than 5 people all driving motorcycles, so it doesn't make economic sense either.

6)  They drive loud vehicles. The motorcycle used is commonly referred to as a chopper and is about as loud as a motorcycle can get. The zombies are attracted to loud noise, so wouldn't it make more sense to drive vehicles that make less sound? Because as soon as you stop the bike and get off, you're surrounded by zombies that heard you coming. And it's not just the bike; the ~1980s Chevy SUV in the show is also very loud.

7)  They never run out of food. Seems like that would be an almost daily struggle, keeping enough food available for about a dozen people, yet I've never seen them visit a grocery store or local convenience store to stock up.

8)  They don't carry swords, machetes, clubs, etc. Let's face it, biting is not a very effective means of attack. It's good for animals because they have fangs and little else, but humans have been finding better ways of killing each other since forever. So why doesn't everyone on the show carry a sword or machete or at least a baseball bat? Anything is better than wasting valuable bullets all the time. Sure, a dozen zombies approaching? Shoot them. One zombie approaching? Save the bullet, cut off its head.

9)  They do not wear protective clothing. Human teeth are not exactly the sharpest teeth in the animal kingdom. The leather shoes your dog ripped to shreds within minutes would probably take you days to bite through. So why do they walk around half-naked?
Yes, I know it's hot in Atlanta, but you'd think they'd at least have some tough leather coats or something for protection. Maybe put a few small vent holes in the fabric if it's really hot. Or better: make your own chainmail. Chainmail was used for thousands of years for protection from swords and is still used by scuba divers for protection from sharks. If swords and sharks can't puncture it, human teeth don't stand a chance.

10)  They don't build barricades or dig trenches around properties. In Season 2 they stayed at a farm in the middle of nowhere. While being far away from people is a great way to stay far away from zombies, it would still make sense to build some sort of defenses. Hordes of zombies would knock down almost any fence, but what about a trench or moat? Maybe something not too wide, so it can be jumped over easily but a zombie would fall into it, because I haven't seen too many jumping zombies on the show.

11)  They don't live in a mall or tall office building. A mall would be perfect. Malls have large security gates designed to keep even hundreds of people from breaking in, and they offer lots of supplies and food. They're usually hundreds of thousands of square feet and fully enclosed; one could probably live their entire life happily in a mall. A tall office building with an on-site cafeteria would be another good choice. They also usually offer good security, office furniture could be pushed out of the windows to crush approaching zombies, and the cafeteria is usually stocked to provide food for hundreds or thousands of office workers, so food wouldn't be a problem for a long time.

So there you have it: eleven things that don't make sense in The Walking Dead. Have any of your own you'd like to add, or were one of these things covered in the show? Let me know in the comments.

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussions on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should (finally) give it a try and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology.

This is the first article in a series on IPv6 configuration:

- Configure IPv6 on your Linux system
- DHCPv6: Provide IPv6 information in your local network
- Enabling DNS for IPv6 infrastructure
- Accessing your web server via IPv6

Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

Let's embrace IPv6

The basic configuration on Linux is actually very simple, as the kernel, operating system, and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s).

We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (yours should look at least similar to this):

$ sudo nano /etc/network/interfaces

auto eth0

# IPv4 configuration
iface eth0 inet static
  address 192.168.1.2
  network 192.168.1.0
  netmask 255.255.255.0
  broadcast 192.168.1.255

# IPv6 configuration
iface eth0 inet6 static
  pre-up modprobe ipv6
  address 2001:db8:bad:a55::2
  netmask 64

Of course, you might have to adjust your interface device (eth0), or you might want additional directives for further devices (eth1, eth2, etc.). The auto instruction ensures that your device is enabled and configured during the boot phase. Whether the pre-up directive is needed depends on your kernel configuration, but in most scenarios it is an optional line. Anyway, it doesn't hurt to have it, just to be on the safe side.

Next, either restart your network subsystem like so:

$ sudo service networking restart

Or you might prefer to do it manually with identical parameters, like so:

$ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64

In case you're logged in to your PC remotely (i.e. via SSH), it is highly advisable to opt for the second choice and add the address manually, since restarting the network subsystem would drop your connection. You can check your configuration afterwards with one of the following commands (depending on what is installed):

$ sudo ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
          inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
          inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
          inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

$ sudo ip -6 address show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2001:db8:bad:a55::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::221:5aff:fe50:d794/64 scope link
       valid_lft forever preferred_lft forever

In both cases, the output confirms that our network device has been assigned a valid IPv6 address. That's it in general for the setup on a single system.
But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IPv4 network. More details are available on the official Ubuntu Wiki. Continue with the next article in this series to configure your network to provide IPv6 address information automatically in your local infrastructure.
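If you'd rather script the manual steps above, here is a minimal sketch using the newer iproute2 tooling (the ip command) instead of ifconfig. The interface name (eth0) and the documentation address 2001:db8:bad:a55::2/64 are the same examples used in this article and are assumptions to adjust for your own network:

#!/bin/bash
# Minimal sketch: assign a static IPv6 address with iproute2 and verify it.
# Assumptions: IFACE and ADDR are the article's example values; adjust both.

IFACE="eth0"
ADDR="2001:db8:bad:a55::2/64"

# Add the address (equivalent to the ifconfig invocation shown above).
sudo ip -6 addr add "$ADDR" dev "$IFACE"

# Confirm the address is now attached to the interface.
if ip -6 addr show dev "$IFACE" | grep -q "${ADDR%/*}"; then
    echo "IPv6 address configured on $IFACE"
else
    echo "IPv6 address not found on $IFACE" >&2
    exit 1
fi

# Optional sanity check: ping the all-nodes link-local multicast group.
ping6 -c 3 -I "$IFACE" ff02::1

Note that running the script twice makes ip complain that the address already exists, which is harmless; the address itself is not persistent across reboots, which is exactly why the /etc/network/interfaces entries above are still needed.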

    Read the article

  • How to Customize Fonts and Colors for Gnome Panels in Ubuntu Linux

    - by The Geek
    Earlier this week we showed you how to make the Gnome Panels totally transparent, but you really need some customized fonts and colors to make the effect work better. Here’s how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz.

Changing the Gnome Colors the Easy Way

You’ll first need to install Gnome Color Chooser, which is available in the default repositories (the package name is gnome-color-chooser). Then go to System > Preferences > Gnome Color Chooser to launch the program. When you see all these tabs you immediately realize that Gnome Color Chooser changes not only the font color of the panel, but also the color of fonts all over Ubuntu, desktop icons, and many other things as well.

Now switch to the Panel tab, where you can control everything about your panels. You can change the font, font color, background, and background color of the panels and menus. Tick the “Normal” option and choose the color you want for the panel font. If you want, you can change the hover color of the buttons on the panel too.

A little below the color options are the font options, which include the font, font size, and the X and Y padding. The first two options are pretty straightforward: they change the typeface and the size. X-Padding and Y-Padding may confuse you, but they are interesting; they can give your panels a nice look by increasing the space between items on the panel. (The original article illustrates the X-Padding and Y-Padding effects with screenshots.)

The bottom half of the window controls the look of your menus, that is, the Applications, Places, and System menus. You can customize them just the way you did with the panel.

Alright, that was the easy way to change the font of your panels.

Changing the Gnome Theme Colors the Command-Line Way

The other (not really so hard) way is to edit the configuration files that tell your panel how it should look. In your Home folder, press Ctrl+H to show hidden files, then find the file “.gtkrc-2.0”, open it, and insert the following line. If there are any other lines in the file, leave them intact.

include “/home/<username>/.gnome2/panel-fontrc”

Don’t forget to replace <username> with your user account name. When done, save and close the file. Now navigate to the folder “.gnome2” in your Home folder and create a new file named “panel-fontrc”. Open the file you just created with a text editor and paste the following into it:

style “my_color”
{
  fg[NORMAL] = “#FF0000”
}
widget “*PanelWidget*” style “my_color”
widget “*PanelApplet*” style “my_color”

This text will make the font red. If you want another color, replace the hex value/HTML notation (in this case #FF0000) with the value of the color you want. To get the hex value you can use GIMP or Gcolor2, which is available in the default repositories, or you can right-click on your panel > Properties > Background tab, click to choose the color you want, and copy the hex value. Don’t change anything else in the text. When done, save and close.

Now press Alt+F2 and enter “killall gnome-panel” to force the panel to restart, or log out and log back in.

Most of you will prefer the first way of changing the font and color for its ease of use and because it gives you many more options, but some may not have the ability or desire to download and install a new program on their machine, which is why we covered both ways.
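If you'd like to automate the command-line method, here is a minimal sketch that writes both files described above and restarts the panel. The style rules and the #FF0000 example color come straight from the article; the script structure itself is an illustration under those assumptions, not a polished tool:

#!/bin/bash
# Minimal sketch of the command-line method described above.
# Assumption: COLOR uses the article's example value; change it as needed.

COLOR="#FF0000"
FONTRC="$HOME/.gnome2/panel-fontrc"

# Ensure ~/.gtkrc-2.0 includes our panel style file (append only once).
grep -qs "panel-fontrc" "$HOME/.gtkrc-2.0" || \
    echo "include \"$FONTRC\"" >> "$HOME/.gtkrc-2.0"

# Write the style file that colors the panel font.
mkdir -p "$HOME/.gnome2"
cat > "$FONTRC" <<EOF
style "my_color"
{
  fg[NORMAL] = "$COLOR"
}
widget "*PanelWidget*" style "my_color"
widget "*PanelApplet*" style "my_color"
EOF

# Restart the panel so the change takes effect.
killall gnome-panel

In a standard Gnome session the panel respawns automatically after killall, so the new color should appear within a second or two without logging out.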

    Read the article
