Search Results

Search found 10196 results on 408 pages for 'article city'.


  • CodePlex Daily Summary for Friday, May 21, 2010

    CodePlex Daily Summary for Friday, May 21, 2010New Projects.Net wrapper around the Neo4j Rest Server: Neo4jRestSharp is a .Net API wrapper for the Neo4j Rest Server. Neo4j is an open sourced java based transactional graph database that stores data ...3D Editor Application Framework: A starting point for building 3D editing applications, such as video game editors, particle system editors, 3D modelling tools, visualization tools...Bulk Actions for SharePoint: This project aims to provide some essential and generic bulk actions for SharePoint lists. Idea is to include any custom actions that can be applie...CineRemote - The hometheater control board: CineRemote's purpose is to offer an alternative to expensive control system for dedicated hometheater rooms. CrmContrib: CrmContrib is a collection of useful items for developers and customizers working with the Dynamics CRM platform.db2xls: OleDb,Sql Server,Sqlite,....to excel, from sqlHappyNet - Silverlight reference application: HappyNet is a project using best practices to build an e-commerce web site. It is a full Silverlight application based on a solid architecture (PR...IP Multicast Library: IP Multicast Library makes it easier for developers to add Multicast, messaging to projects.Linkbutton Web Part: This Link Button Web Part can be installed in any SharePoint 2007 web site. You can onfigure a URL with query string that will be used by the Link...Majordomus pro Windows: Nástroj určený pro správce a vývojáře slouží k řízenému spuštění používaných a vypnutí nepotřebných služeb, procesů a aplikací ve Windows. Pomocí s...MRDS Samples: The MRDS Samples site hosts a variety of code samples for Microsoft Robotics Developer Studio (RDS).Mute4: Mute4 is a simple application that allows you to set a mute/vibration profile and it will switch back to your normal profile automatically after a ...Niko Neko Pureya: Niko Neko Pureya is a media player designed for people who watches a series of videos (like anime). It is very simple and easy to use & learn. And ...NVPX - VP8 Video Codec for .Net: NVPx allows you to use the now open-source VP8 codec on the .Net platform.openrs: openrs is an open-source RuneScape 2 emulator designed to be used with newer engine clients.Prism Evaluation: prism evaluationProj4Net: Proj4Net is a C#/.Net library to transform point coordinates from one geographic coordinate system to another, including datum transformation. The ...Read it to me!: Read it to me will allow you to load txt and rtf files and then speak them using SAPI 5 voices that are installed on your computer with an option t...sGSHOPedit: -SilverDice: SilverDice...SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight contains a collection of silverlight controls making life easier for developers. You'll no longer have to worry ...Silverlight Report: Open-Source Silverlight Reporting Engine. This project allows you to create and print reports using Silverlight 4.SimTrain5000: Train simulation project on University College of Northern Denmark.Springshield Sample Site for EPiServer CMS: City of Springshield - The accessible sample site for EPiServer CMS 6.Teach.Net: Teach.Net is a library/framework that can be used to create applications for testing and learning.The Amoeba Project: The Amoeba Project is a platform to be developed to embrace most of the latest Microsoft Technologies. 
Still in a conceptual stage however, it loo...The Fastcopy Helper: The Fastcopy Helper is a auxiliary tool for fastcopy.vow: vowWCF Client Generator: This code generator avoids the shortcomings of svcutil when generating proxies for services with a large number of methods.WebCycle: WebCycle is a screensaver application that cycles through web pages. This was originally created to cycle through Reporting Services reports so th...XGate2D - XNA 2D Game Engine: XGate2D is 2D game engine built using XNA Framework. XGate2D currently has 8 features: input handler, animation, Graphical User Interface (GUI), ...XNA Catapult Minigame for XNA 4: XNA 4 implementation of the Catapult Minigame Sample from XNA Creators Club.New ReleasesADefHelpDesk: ADefHelpDesk (Standard ASP.NET Version) 01.00.00: ADefHelpDesk a Help Desk / Ticket Tracker module * NOTE: This version is NOT a DotNetNuke module - It is a standard ASP.NET Application * SQL 2005...Bulk Actions for SharePoint: First Release: First Release - Includes following bulk list actions: *Delete *Checkin/Checkout *Publish/Unpublish *Move *Update MetadataCheck-in Wizard for ArenaChMS: v1.2.1: v 1.2.0 updated to work with Arena 2009.2 (see notes below). Added support for "At Kiosk" and "At Location" printing. Added support for print l...ConfigTray: 1.5: Version 1.5 will have a new UI for managing ConfigTray config. Instead of manually editing configtray.exe.config to add/delete/edit settings and fi...CrmContrib: CrmContribWorkflow 1.0 ALPHA1: This is an initial release of the CrmContribWorkflow 1.0 components. At the moment there are only two activities included in this release. Add Cont...DemotToolkit: DemotToolkit-0.1.0.50830: Initial release.DemotToolkit: DemotToolkit-0.1.1.51107: Fixed crashing in some circumstances.Dot Game: Dot Game Stable Release: Dot Game This is latest stable release without network play mode. (Network play mode is under development)Dynamic Survey Forms - SharePoint Web Part: Fix for missing dlls and documentation: Added missing assemblies to setup.zip. Installation instructions.EnhSim: V1.9.8.7: Added Sharpened Twilight ScaleEvent Scavenger: Viewer 3.2.2: Fixed a bug in the viewer where the previous view 'Top x' filter was not restored after the application was reopened.F# Project Extender: V0.9.2.0 (VS2008,VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. Fixed bugs: -VS2010 crash on MoveUp(MoveDown) of renamed file -Adding files brea...FlickrNet API Library: 3.0 Beta 2: The final Beta for the 3.0 release. Fixes a major issue with Photosets.GetList as well as a number of smaller bugs, and adds the new Usage extras ...Folder Bookmarks: Folder Bookmarks 1.5.7: The latest version of Folder Bookmarks (1.5.7), with the new Help feature - all the instructions needed to use the software (If you have any sugges...Linkbutton Web Part: V1.1: Use WinZip to unzip. See docs folder for installation instructions.Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Final: Live-Exchange Calendar Sync Beta May 14, 2010 release of Live-Exchange Calendar Sync 1.0 . (Version 46127) Getting StartedInfo about installation ...MEFedMVVM: MEFedMVVM: This version contains the MEFedMVVM ViewModelLocator and also some basic services such as Mediator and StateManager. You can download the code fr...Mentor Text Database: May 2010 Release with instrumentation: This should function the same as the previous version. 
Some enhancements have been made, and additional instrumentation has been added to help anal...Merthin: SSF 2010: Code and documentation presented at the Student Science Fair of the Faculty of Mathematics and Computer Science at the University of Habana. The ma...NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store_02.01.00: NB_Store v2.1.0 THIS IS AN ALPHA RELEASE FOR TESTING ONLY......DO NOT USE IT ON A LIVE SYSTEM.NerdDinner.com - Where Geeks Eat: NerdDinner - Four Database Access Samples: Chris Sells worked with Nick Muhonen from Useable Concepts and Nick created four samples exploring how an ASP.NET MVC application can access databa...openrs: Devstart: Trunk release, empty project.Over Store: OverStore 1.19.0.0: - Version number is increased. - Add methods for specifying custom callback methods to TableMappingRepositoryConfiguration. - Object attaching fu...Rnwood.SmtpServer: Rnwood.SmtpServer 2.0: SmtpServer 2.0 is a .NET SMTP server component written in pure c#. It was written to power http://smtp4dev.codeplex.com/ but can easily be used by ...Scrum Sprint Monitor: v1.0.0.48524 (.NET 4-TFS 2010): What is new in this release? #6132 - Bug with open work hours; Added untested support for MSF for Agile process template; Improved data reporti...SharePoint Rsync List: 1.0.0.0: This initial 1.0 release includes a new feature which manages timer jobs on your sync listShould: Beta 1.1: Updated the namespaces. The extension methods are now in the root Should namespace. The other classes are not in child namespaces.SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight: Kindly give your comments about this project and tell how you feel about it. I'm still new in creating controls, hopefully you guys can support me....Silverlight Report: SilverlightReport_v0.1_alpha_bin: SilverlightReport v0.1 alphaSLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.2.0: Fixed a problem with long referenced DetectionResults that might have caused an IndexOutOfRangeException Added Marker.LoadFromResource to get rid...The Fastcopy Helper: My Fastcopy Helper 1.0: This Source Code Is use a method to run it . The method is thinked by my bain. So , The Performance maybe lower.Thinktecture.DataObjectModel: Thinktecture.DataObjectModel v0.12: Some bugs fixed. See ChangeLog.txt for more infos.Umbraco CMS: Umbraco 4.0.4.1: A stability release fixing 13 issues based on feedback from 4.0.3 users. Most importantly is a fix to a serious date bug where day and month could ...Usa*Usa Libraly: Smart.Web.Mobile ver 0.2: Smart.Web.Mobile pictgram convert library for japanese galapagos k-tai( ゚д゚) ver 0.2. - Custom encoding for HttpRequest.ContentEncoding / HttpResp...VCC: Latest build, v2.1.30520.0: Automatic drop of latest buildvow: dream: I have a dreamvow: test: testWCF Client Generator: Version 0.9.1.42927: Initial feature set complete. 
Detailed UI pending.WebCycle: WebCycle 1.0.20: Initial CodePlex releaseWebCycle: WebCycle 1.0.21: Added Uri validataion before saving settingsWhois Application: 1.5 release: - uses the whois.iana.org to dynamically lookup the whois server for each top level domain - enables enter key press for searchWing Beats: Wing Beats 0.9: This first release is focused on the core functionality and XHTML 1.0 strict generation in Asp.NET MVC.Most Popular ProjectsWeb Service Software FactoryPlasmaAquisição de Sinais Vitais em Tempo Real (Vital signs realtime data acquisition)Octtree XNA-GS DrawableGameComponentRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Most Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationPHPExcelBlogEngine.NETSQL Server PowerShell ExtensionsCaliburn: An Application Framework for WPF and SilverlightNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices: Windows Azure Security GuidanceFluent Ribbon Control Suite

    Read the article

  • If You Could Cut Your Meeting Times in ½ Would You?

    - by Brian Dayton
                    I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How? Social Media for the Enterprise, Not Social Media In the Enterprise Bear with me. I'm not talking about whether or not workers should or shouldn't have access to Facebook on corporate networks. That topic has been discussed @ length. I'm also not talking about the direct benefits of Social Networking tools like Presence (the ability to see someone online and ask a question in real-time), blogs, RSS feeds or external tools like Twitter. The Un-Measurable Benefits Would you do something that you believe will have a positive effect--but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids--and the same has happened to my friends and family--I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years--but I've increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization. Especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition--making meetings shorter/more effective--a large amount of time is spent: -          At Project Kick-off: Meeting and understanding team member's histories, goals & roles -          Ongoing: Summarizing events since the last meeting or update email In my personal, Facebook life today I know that: -          My best friend from college - has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home -          One of my co-workers started conference calls at 6:30 this morning -          My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too) Strengthening Weak Links A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010) calls out the #1 misconception as follows: 1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links"--that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future.   I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as saying it doesn't matter. 
I can point to two extremely valuable work relationships that I've had over the past 5 years: -          I shared an office with one of them -          I met the other person, face-to-face, only once Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.        

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
     Humans' quest to explore the surrounding planets, to see whether we can live there or not, is taking new shape today. NASA’s Mars probing robot, Curiosity, blasted off today on its 9-month journey to reach Mars and explore it for the possibilities of life there. Scientists say that Curiosity is one of the most advanced rovers ever launched to probe for life on other planets. Here is the launch video and some analysis by a news reporter. Let’s look at the 6 interesting facts about the mission. 1. It’s as big as a car Curiosity is the biggest rover ever launched by NASA to probe for life on other planets. It’s as big as a car and almost double the size of its predecessor rover Spirit. The length of Curiosity is around 9 feet 10 inches (3 meters), its width is 9 feet 1 inch (2.8 meters) and its height is 7 feet (2.1 meters). 2. Powered by Plutonium – Lasts 24×7 for 23 months NASA’s earlier missions to explore Mars were powered by solar power, and that hindered the rovers’ ability to move around when the Sun was hidden. Due to their dependency on the Sun, the earlier rovers were not able to traverse places where there is no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by plutonium’s radioactive decay. The plutonium weighs around 10 pounds and can generate the power required for operating the rover for close to 23 months. The best part of the new power system is that Curiosity can roam around in darkness or light, all year round. 3. Rocket powered backpack for a science fiction style landing Curiosity is so heavy that NASA could not use a parachute and balloons to air-drop the rover onto the surface of Mars as in its previous missions. They are trying out a new science-fiction-style air-dropping mechanism that is similar to a sky crane heavy-lift helicopter. The landing of the rover begins with entry into the Mars atmosphere protected by a heat shield. At about 6 miles above the surface, the heat shield is jettisoned and a parachute is deployed to glide the rover smoothly. When the rover reaches 3 miles above the surface, the parachute is jettisoned and the eight-motor rocket backpack is used for a smooth and impact-free landing as shown in the image. Here is an animation created by NASA of the landing sequence. If you are interested in more detailed information about the landing process, check the landing sequence picture available on NASA’s website. 4. Equipped with Star Wars style laser gun Hollywood movie directors and novelists have always imagined aliens coming to Earth with spaceships full of laser guns, blasting the objects that come in their way. With Curiosity the equation is going to change. It has a powerful laser gun mounted on one of its arms to beam a laser at rocks and vaporize them. This is not part of any assault mission Curiosity is expected to carry out; the laser gun will be used to carry out experiments to detect life and understand nature. 5. Most sophisticated laboratory powered by 10 instruments Around 10 state-of-the-art instruments are part of the Curiosity rover, and these 10 instruments form the most advanced rover-based lab ever built by NASA. There are instruments to cut through rocks to examine them, and other instruments will search for organic compounds. Mounted cameras can study targets from a distance; arm-mounted instruments can study the targets they touch. A microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers. 6.
Rover Carrying 1.24 million names etched on silicon In early June 2009, NASA launched a campaign called “Send Your Name to Mars” and around 1.24 million people registered their names through NASA’s website. All those 1.24 million names are etched on silicon chips mounted onto Curiosity’s deck. If you registered your name in the campaign, maybe your name is going to reach Mars soon. Curiosity On the Web If you wish to follow the mission, here are a few links to help you: NASA’s Curiosity Web Page Follow Curiosity on Facebook Follow @MarsCuriosity on Twitter Artistic Gallery Image of Mars Rover Curiosity A printable sheet of Curiosity Mission [pdf] Images credit: NASA This article, titled 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • How to Collect Debug Info for Oracle SQL Developer

    - by thatjeffsmith
     In a perfect world, there would be no software bugs. Developers would always test their code. QA would find any scenarios and bugs the developers hadn’t already thought of. Regression tests would be complete and flawless. But alas, we can only afford to pay mere humans here, so we will have bugs from time to time. Or sometimes you are trying to do something the software wasn’t designed for, or perhaps your machine has exhausted its resources trying to build the un-buildable. When you run into problems, you will need help. Developers need your help so they can help you. Surprisingly enough, feedback like this isn’t very helpful: Your program isn’t working. How can I make it work? When you are ready to work with us on the SQL Developer OTN forum, you will most likely be asked to run SQL Developer and capture the output from the command console. In case you need help with this, here’s a step-by-step process you can follow in Windows 7 (it should work in XP too). Open a Windows command window: Start – Run – CMD. Once it’s open, click on the window icon and select ‘Defaults.’ Change the default buffer size to be something bigger, much bigger. Set the CMD window default buffer size HIGHER Note: you only need to do this once. Navigate to your SQL Developer installation folder. Instead of running the ‘sqldeveloper.exe’ file in the root directory, we are going to go several sub-directories down. Find the ‘bin’ sub-directory and run the ‘sqldeveloper.exe’ there. When you do this, a CMD window will open, and then you’ll see the SQL Developer application load. The SQL Developer bin directory - run the tool from here and get a logging window Use SQL Developer as normal, until it ‘breaks’ or ‘hangs’. Now, you are ready to grab the nitty-gritty information that MIGHT tell the developer what is going wrong or happening in your scenario. Click back into the CMD window and send a Ctrl+Break or a Ctrl+Pause. If you’re on a newer laptop that doesn’t have this key, be sure to check the ‘Fn’ subset of keys. If you need to map the BREAK or PAUSE buttons, this article might help. You can also try the on-screen keyboard in Windows – just type ‘OSK’ in your START – RUN prompt. Copy the logging information from the command window – all of it. We need this information, help us get it! Open a case with Oracle Support or start a thread on the forums, or email me. If you’re on my blog reading this, it’s the least I can do to help. Now, before you hit ‘Send’ or ‘Post’ or ‘Submit’ – be sure to add a brief description of what you were doing in the application when you ran into the problem. Even if you were doing ‘nothing,’ let us know how many connections you had open, what windows were active, etc. The more you can tell us, the better your odds of getting a quick fix or at least an answer as to what is happening. Also include the following information: the version of SQL Developer you are running, the version of the JDK you are using, the OS you are using, and the version of Oracle you are connected to. Now, don’t be surprised if you get asked to upgrade to a supported configuration, say ‘version 3.1 and the 1.6 JDK.’ Supporting older versions of software is fun, and while we enjoy a challenge, it may be easier for you to upgrade your way out of the problem at hand.

    Read the article

  • Create a Persistent Bootable Ubuntu USB Flash Drive

    - by Trevor Bekolay
    Don’t feel like reinstalling an antivirus program every time you boot up your Ubuntu flash drive? We’ll show you how to create a bootable Ubuntu flash drive that will remember your settings, installed programs, and more! Previously, we showed you how to create a bootable Ubuntu flash drive that would reset to its initial state every time you booted it up. This is great if you’re worried about messing something up, and want to start fresh every time you start tinkering with Ubuntu. However, if you’re using the Ubuntu flash drive to diagnose and solve problems with your PC, you might find that a lot of problems require guess-and-test cycles. It would be great if the settings you change in Ubuntu and the programs you install stay installed the next time you boot it up. Fortunately, Universal USB Installer, a great little program from Pen Drive Linux, can do just that! Note: You will need a USB drive at least 2 GB large. Make sure you back up any files on the flash drive because this process will format the drive, removing any files currently on it. Once Ubuntu has been installed on the flash drive, you can move those files back if there is enough space. Put Ubuntu on your flash drive Universal-USB-Installer.exe does not need to be installed, so just double click on it to run it wherever you downloaded it. Click Yes if you get a UAC prompt, and you will be greeted with this window. Click I Agree. In the drop-down box on the next screen, select Ubuntu 9.10 Desktop i386. Don’t worry if you normally use 64-bit operating systems – the 32-bit version of Ubuntu 9.10 will still work fine. Some useful tools do not have 64-bit versions, so unless you’re planning on switching to Ubuntu permanently, the 32-bit version will work best. If you don’t have a copy of the Ubuntu 9.10 CD downloaded, then click on the checkbox to Download the ISO. You’ll be prompted to launch a web browser; click Yes. The download should start immediately. When it’s finished, return the the Universal USB Installer and click on Browse to navigate to the ISO file you just downloaded. Click OK and the text field will be populated with the path to the ISO file. Select the drive letter that corresponds to the flash drive that you would like to use from the dropdown box. If you’ve backed up the files on this drive, we recommend checking the box to format the drive. Finally, you have to choose how much space you would like to set aside for the settings and programs that will be stored on the flash drive. Considering that Ubuntu itself only takes up around 700 MB, 1 GB should be plenty, but we’re choosing 2 GB in this example because we have lots of space on this USB drive. Click on the Create button and then make yourself a sandwich – it will take some time to install no matter how fast your PC is. Eventually it will finish. Click Close. Now you have a flash drive that will boot into a fully capable Ubuntu installation, and any changes you make will persist the next time you boot it up! Boot into Ubuntu If you’re not sure how to set your computer to boot using the USB drive, then check out the How to Boot Into Ubuntu section of our previous article on creating bootable USB drives, or refer to your motherboard’s manual. Once your computer is set to boot using the USB drive, you’ll be greeted with splash screen with some options. Press Enter to boot into Ubuntu. The first time you do this, it may take some time to boot up. Fortunately, we’ve found that the process speeds up on subsequent boots. 
You’ll be greeted with the Ubuntu desktop. Now, if you change settings like the desktop resolution, or install a program, those changes will be permanently stored on the USB drive! We installed avast! Antivirus, and on the next boot, found that it was still in the Accessories menu where we left it. Conclusion We think that a bootable Ubuntu USB flash drive is a great tool to have around in case your PC has problems booting otherwise. By having the changes you make persist, you can customize your Ubuntu installation to be the ultimate computer repair toolkit! Download Universal USB Installer from Pen Drive Linux

    Read the article

  • Book Review Charlene Li's New Book: Open Leadership

    - by david.talamelli
    A few weeks ago, I was surprised when I looked in our mail box. I had received an Advance Copy of Charlene Li's new book titled "Open Leadership: How Social Technology Can Transform the Way You Lead". Charlene sent a tweet a while back asking anyone interested in receiving the book to submit their details. I sent off my details and didn't think I would hear anything back, so it was a pleasant surprise. With that I almost feel bad that it has taken me 3 weeks to read her book. It took this long mainly because it has been hard to fit in some quality reading time for myself with work, the kids, volunteering, etc..... I am happy to report I have finished her book and wanted to run through my initial thoughts with you. I first came across Charlene Li after reading her book "Groundswell" a few years ago, her latest book "Open Leadership" is a follow on from Groundswell and to me it seems like a natural progression from the question "Ok the business landscape is changing, what do we do now?" For me these two books have a different writing style to them. Groundswell from memory spoke about broad social media concepts and adoption and alerted us to some of the changes taking place in the SM landscape. Open Leadership seems to be focussed on taking those broad concepts and finding ways to implement them into your environment. That is breaking broad concepts down into individual action items that can be measured and analysed. As the business world changes Leaders must change their approach and let go of control to more control. One of the things I love reading about is seeing real life examples of how people and organisations are making these things happen. In this book Charlene has collected some great collateral and case studies from companies such as Cisco, Best Buy, The Red Cross and The State Bank of India (as a side-note, I wish now that I submitted my input for the Leaders I work with here at Oracle - there are some great examples here of people who empower their staff). As society becomes more adept at using social media it is inevitable that Leaders must become open with their employees, clients and partners. From the book some of the key points I took away are (I actually took away a lot more from this book, this is just an overview) : 1) Organisations should encourage risk taking. Without being a "hacker", how can we improve ourselves, our processes, our business, etc... The old saying you only fail by not trying applies here. If Leaders create a culture where people are afraid to stick their neck out - how will you innovate? 2) Leaders need to lead by example - if you want to promote an open and transparent business, a Leader needs to exemplify the traits they would like to see out of their employees. 3) The definition of a Leader is changing, open leadership is about being a catalyst to change that uses networks to spread a vision as opposed to traditional leadership that is viewed as a role. 4) There is a cultural and business shift taking place. Information is more wide-spread and is being disseminated faster than any other time in the past. Leaders who are open and transparent will thrive in this new business environment. 5) Leadership is not defined by a title - it is defined by a person's actions. Also anyone can be a Leader or has Leadership potential in them- it is a matter of drawing that out of people. 
I found this book useful and I also found myself looking at my own actions and the actions of others around me (including my management) to see how open and transparent I am in my work. For me I am glad I read this book as it validated my own thoughts of the changes we are seeing take place. This book has certainly given me some new ideas and helped me push my own boundaries of what I can do. The book has a number of action plans at the end of some of the chapters such as "Conducting you Openness Audit" that I think have helped me take thoughts and ideas and turn them into concrete action items. I have included a link to the introduction of the book here if anyone wants to have a read of it. If anyone else has read this book, it would be great to hear your thoughts/comments/review. Leave your comments below. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
     Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows that tempdb has a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy on the server bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
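     As a rough illustration of the initial checks described in this scenario (not part of the original post), a first pass at the waits and currently running requests might look something like the T-SQL sketch below; the ordering and the session_id filter are only examples.

     -- Top waits for the instance (cumulative since the last restart or stats clear)
     SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
     FROM sys.dm_os_wait_stats
     ORDER BY wait_time_ms DESC;

     -- What is waiting right now, and on what (user sessions only)
     SELECT session_id, status, command, wait_type, wait_time, last_wait_type
     FROM sys.dm_exec_requests
     WHERE session_id > 50;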
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
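     To make the checks walked through above concrete, here is a generic sketch (not the exact script from the post) of pulling the worst single executions with their text and plans, the memory held by the plan cache stores, and how much of the cache is single-use plans. Column names such as single_pages_kb apply to SQL Server 2005/2008; later versions use pages_kb.

     -- Worst single executions currently in cache, with statement text and plan
     SELECT TOP (10)
            qs.execution_count, qs.max_physical_reads, qs.max_logical_reads, qs.max_worker_time,
            st.text AS statement_text, qp.query_plan
     FROM sys.dm_exec_query_stats AS qs
     CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
     CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
     ORDER BY qs.max_physical_reads DESC;

     -- Memory taken by the SQL and object plan cache stores (SQL Server 2005/2008 column names)
     SELECT type, SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_store_mb
     FROM sys.dm_os_memory_clerks
     WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
     GROUP BY type;

     -- How much of the plan cache is tied up in plans that were used only once
     SELECT objtype,
            COUNT(*) AS single_use_plans,
            SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS single_use_mb
     FROM sys.dm_exec_cached_plans
     WHERE usecounts = 1
     GROUP BY objtype
     ORDER BY single_use_mb DESC;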
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relate to each other. One of the greatest tools out there is actually a free in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems, it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you replace your hardware as that is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware because they are not performing at their best. I had a nightmare related to this issue in a deal with an infrastructure team as I suggested that they replace their faulty hardware. This is because they were initially not accepting the fact that it is the fault of their hardware. But it is really easy to say “Trust me, I am correct”, while it is equally important that you put some logical reasoning along with this statement. PAGEIOLATCH_XX is such a kind of those wait stats that we would directly like to blame on the underlying subsystem. Of course, most of the time, it is correct – the underlying subsystem is usually the problem. From Book On-Line: PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX Explanation: Simply put, this particular wait type occurs when any of the tasks is waiting for data from the disk to move to the buffer cache. ReducingPAGEIOLATCH_XX wait: Just like any other wait type, this is again a very challenging and interesting subject to resolve. Here are a few things you can experiment on: Improve your IO subsystem speed (read the first paragraph of this article, if you have not read it, I repeat that it is easy to say a step like this than to actually implement or do it). This type of wait stats can also happen due to memory pressure or any other memory issues. Putting aside the issue of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters. If due to any reasons, the memory is not optimal and unable to receive the IO data. This situation can create this kind of wait type. Proper placing of files is very important. We should check file system for the proper placement of files – LDF and MDF on separate drive, TempDB on separate drive, hot spot tables on separate filegroup (and on separate disk), etc. Check the File Statistics and see if there is higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can significantly reduce lots of CPU, Memory and IO (considering cover index has much lesser columns than cluster table and all other it depends conditions). 
You can refer to the two articles’ links below previously written by me that talk about how to optimize indexes. Create Missing Indexes Drop Unused Indexes Updating statistics can help the Query Optimizer to render optimal plan, which can only be either directly or indirectly. I have seen that updating statistics with full scan (again, if your database is huge and you cannot do this – never mind!) can provide optimal information to SQL Server optimizer leading to efficient plan. Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussions of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
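     The file-level I/O stalls and the memory counters called out in this checklist can also be read straight from T-SQL. A rough, generic sketch follows (not part of the original article; the LIKE filter on object_name is there to cover named instances):

     -- Per-file read/write stalls (milliseconds) for every database
     SELECT DB_NAME(vfs.database_id) AS database_name,
            mf.name AS logical_file_name,
            vfs.num_of_reads, vfs.io_stall_read_ms,
            vfs.num_of_writes, vfs.io_stall_write_ms
     FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
     JOIN sys.master_files AS mf
       ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
     ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

     -- A few of the memory counters listed above, read from inside SQL Server
     -- (Buffer cache hit ratio needs its _Base counter to compute a true percentage)
     SELECT counter_name, cntr_value
     FROM sys.dm_os_performance_counters
     WHERE object_name LIKE '%Buffer Manager%'
       AND counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');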

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
     I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting. The questions from readers give me a good idea of what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I should blog about it. I asked for permission to publish his name but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption. Jeff: Your script does not work at all. I think there is a bug in SQL Server. Pinal: Would you please explain in detail? Jeff: Your code does not limit the CPU usage. Pinal: How did you measure it? Jeff: Well, we have third party tools for it, but let us say I have limited the resources for Reporting Services and used the script described in your blog. After that I ran only the reporting service workload; the CPU is still used more than 100% and it is not limited to 30% as described in your script. Clearly something is wrong somewhere. Pinal: Did you say you ONLY ran the reporting server load? Jeff: Yeah, to validate I ran ONLY the reporting server load and the CPU did not throttle at 30% as per your script. Pinal: Oh! I get it. Here is the answer - CAP_CPU_PERCENT = 30. Use it. Jeff: What is that? I think your earlier script says it will throttle the Reporting Service workload and the Application/OLTP workload and balance them. Pinal: Exactly, that is correct. Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense! Pinal: Hmm…feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Configuration: [Read Earlier Post] Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30 Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100 Example 1: If there is only the Reporting Workload on the server: SQL Server will not limit the workload's CPU usage to only 30%; the SQL Server instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU. Example 2: If there is the Reporting Workload and a heavy Application/OLTP workload: SQL Server will allocate a maximum of 30% CPU resources to the Reporting Workload and allocate the remaining resources to the heavy application/OLTP workload. The reason for this enhancement is better utilization of the resources. Consider the case where there is only a single workload, which we have limited to a max CPU usage of 30%. The other unused, available CPU resources are then wasted. In this situation SQL Server allows the workload to use more than 30% of resources, leading to overall improved/optimized performance. However, in the case of multiple workloads where lots of resources are needed, the limits specified in MAX_CPU_PERCENT are acknowledged. Example 3: If there is a situation where the max CPU limit has to be enforced: This is a very interesting scenario; in the case when the max CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive.
It will never let CPU usage for reporting workload to go over 30% in our case. You can use the key word as follows: -- Creating Resource Pool for Report Server CREATE RESOURCE POOL ReportServerPool WITH ( MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30, CAP_CPU_PERCENT=40, MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30) GO Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40, what it means is that when SQL Server Instance is under heavy load under different workload it will use the maximum CPU at 30%. However, when the SQL Server instance is not under workload it will go over the 30% limit. However, as CAP_CPU_PERCENT is set to 40, it will not go over 40% in any case by limiting the usage of CPU. CAP_CPU_PERCENT puts a hard limit on the resources usage by workload. Jeff: Nice Pinal, you should blog about it. [A day passes by] Pinal: Jeff, it is done! Click here to read it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
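     For context, a resource pool like the one above only takes effect once a workload group and a classifier function route sessions into it, and the configuration is reconfigured. A minimal sketch of that wiring, with hypothetical names (the full script in the earlier article may differ):

     -- Workload group that uses the pool defined above
     CREATE WORKLOAD GROUP ReportServerGroup
     USING ReportServerPool;
     GO

     -- Classifier function: route the Reporting Services login to the group
     -- (the login name below is purely illustrative)
     USE master;
     GO
     CREATE FUNCTION dbo.fn_RG_Classifier()
     RETURNS sysname
     WITH SCHEMABINDING
     AS
     BEGIN
         DECLARE @group sysname = N'default';
         IF SUSER_SNAME() = N'ReportServerLogin'
             SET @group = N'ReportServerGroup';
         RETURN @group;
     END;
     GO

     -- Point Resource Governor at the classifier and apply the configuration
     ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_RG_Classifier);
     ALTER RESOURCE GOVERNOR RECONFIGURE;
     GO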

    Read the article

  • RSS Feeds currently on Simple-Talk

    - by Andrew Clarke
    There are a number of news-feeds for the Simple-Talk site, but for some reason they are well hidden. Whilst we set about reorganizing them, I thought it would be a good idea to list some of the more important ones. The most important one for almost all purposes is the Homepage RSS feed which represents the blogs and articles that are placed on the homepage. Main Site Feed representing the Homepage ..which is good for most purposes but won't always have all the blogs, or maybe it will occasionally miss an article. If you aren't interested in all the content, you can just use the RSS feeds that are more relevant to your interests. (We'll be increasing these categories soon) The newsfeed for SQL articles The .NET section newsfeed The newsfeed for Red Gate books The newsfeed for Opinion articles The SysAdmin section newsfeed if you want to get a more refined feed, then you can pick and choose from these feeds for each category so as to make up your custom news-feed in the SQL section, SQL Training Learn SQL Server Database Administration TSQL Programming SQL Server Performance Backup and Recovery SQL Tools SSIS SSRS (Reporting Services) in .NET there are... ASP.NET Windows Forms .NET Framework ,NET Performance Visual Studio .NET tools in Sysadmin there are Exchange General Virtualisation Unified Messaging Powershell in opinion, there is... Geek of the Week Opinion Pieces in Books, there is .NET Books SQL Books SysAdmin Books And all the blogs have got feeds. So although you can get all the blogs from here.. Main Blog Feed          You can get individual RSS feeds.. AdamRG's Blog       Alex.Davies's Blog       AliceE's Blog       Andrew Clarke's Blog       Andrew Hunter's Blog       Bart Read's Blog       Ben Adderson's Blog       BobCram's Blog       bradmcgehee's Blog       Brian Donahue's Blog       Charles Brown's Blog       Chris Massey's Blog       CliveT's Blog       Damon's Blog       David Atkinson's Blog       David Connell's Blog       Dr Dionysus's Blog       drsql's Blog       FatherJack's Blog       Flibble's Blog       Gareth Marlow's Blog       Helen Joyce's Blog       James's Blog       Jason Crease's Blog       John Magnabosco's Blog       Laila's Blog       Lionel's Blog       Matt Lee's Blog       mikef's Blog       Neil Davidson's Blog       Nigel Morse's Blog       Phil Factor's Blog       red@work's Blog       reka.burmeister's Blog       Richard Mitchell's Blog       RobbieT's Blog       RobertChipperfield's Blog       Rodney's Blog       Roger Hart's Blog       Simon Cooper's Blog       Simon Galbraith's Blog       TheFutureOfMonitoring's Blog       Tim Ford's Blog       Tom Crossman's Blog       Tony Davis's Blog       As well as these blogs, you also have the forums.... SQL Server for Beginners Forum     Programming SQL Server Forum    Administering SQL Server Forum    .NET framework Forum    .Windows Forms Forum   ASP.NET Forum   ADO.NET Forum 

    Read the article

  • CodePlex Daily Summary for Thursday, March 04, 2010

    CodePlex Daily Summary for Thursday, March 04, 2010New ProjectsAcPrac: A educational program designed to run on Windows. It is fully customizable. It is developed in C# 2008.argo: Linguistic Tool Camelot: A simple utility for testing cross site data queries in SharePointdelta: Delta is a difference tool for large files. Delta will have several clients, including AJAX and Silverlight. You can view differences in a scroll...EF Dorsal: A dorsal spine for Entity Framework based project. This code generator provides a powerfull Business Layer with almost all high important best p...Excel formatting: Excel formatting projectEyes Recognition: eye recognitionGameOfLife: The Game of Life is a the best example of a cellular automaton. It's developed in C#, SilverLight.GKO Libraries: .NET tool libraries written in C# that we have used in our projectsHello Demo: A simple Hello World application, used to demonstrate access to CodePlex using Team Explorerjog.Portal: jogportal lays the foundation for an extensible portal solution leveraging the latest technology including linq and silver lightKM Brasil: k-meleon, km, gecko, brasil, browser, web, web browser, navegador, navegação, kmeleon, k meleonLost in Translation: Ever find yourself in a foreign country eager (or clueless) to what is written on a shop sign or restaurant menu? With your trusty phone, its came...Machine Learning for .NET: Machine Learning Library for .NET. Initial inclusions will be binary and multi-class classification as well as standard clustering algorithms.Maito: Iron Kingdoms Name GeneratorManPowerEngine: ManPowerEngine is a Silverlight 2D Game Engine. It has an game application framework and supports game object hierarchy management, 2D physics simu...Marvin's Arena: Marvin's Arena is a free and entertaining programming game. The game is designed to easily learn programming in any .NET compatible language. It is...MultiMediaPlayer: 整合我们的内容NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD is a simple distributed cache system written in .NET. NCacheD offers functionality similar to that of MemcacheD but scaled back. NCacheD ...NDAP: OPeNDAP is a client/server system for making local scientific data accessible to remote locations without regard to the local or remote storage for...Open Guide CMS: Open Guide makes your traveling through city more adventurous, mode educational and more funny. You can support us by lines of code, by interesti...Rollback - A social backup tool.: Rollback is a simple and intuitive social backup application. You can create multiple backup jobs with a few mouse clicks and even schedule it to ...sELedit: An editor for elements.data file of the popular MMORPG Perfect World.SharePoint Theme Applicator: SharePoint Theme Applicator will give SharePoint administrators the ability to apply a theme accross a whole web application (i.e. apply a theme to...sPATCH: ! sPATCH - Perfect World Client Autopatcher This beta patcher is an alternative to the default perfect world patcher. It offers easy client patc...Wiggle: A-life investigation. 
Let's make little squirmies!WPFSLBlendModeFx: A blend mode library for WPF & Silverlight.休息提醒: 程序员们,尤其是像我这样对程序痴狂的程序员们。一旦研究起自己感兴趣的程序时,觉得上厕所都是浪费时间。 这个程序是在设定的时间后锁定计算机或关闭显示器,从而从某种程度强迫程序员们去休息New ReleasesAMFParser: 1.1.0: Add handling of DSK object when using BlazeDS Add handling of string references Add handling of DateTime values Correct handling of Double values B...Camelot: Camelot 0.1 Alpha: Early release: 1) Query by content type, optionally list or base template 2) Query by list or base templateDictionary Translator for Umbraco: Dictionary Translator 1.1: This is a minor release that fixes a bug that Thomas Hohler found on the first day of release This package is to be used with the ASP.NET open sou...Dynamic Configuration: Sample Application Release 1: These binaries demonstrate the effect of using DynamicConfigurationManager. The source-code for these binaries is available in the 'Source Code' t...EF Dorsal: EFDorsal v0.3b: Second real version. This version add suport to types and entity sets in tree-view.Enterprise Library Extensions: Release 1.0: This is the initial release of the package. The release will contain one feature only, as being able to deploy the project itself it the milestone ...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.4 beta 2 Released: Hi, Today’s release contains fix for the following issues: * In WPF application chart was throwing exception as VisualStateGroup was not foun...GameOfLife: Game of life: First release of the game. PublishedGameStore League Manager: League Manager 1.0 release 2.: Fixes crashing bugs from the first release. To use: 1. Install SQL Server Express 2005 http://www.microsoft.com/Sqlserver/2005/en/us/express-down....Marvin's Arena: Version 0.0.5.0: Code Editor (development of robots without Visual Studio - no debugging) * Rounds * 3D Battle Engine: Skybox * 3D Battle Engine: Robot...Morphfolia - ASP.NET CMS and Framework: Morphfolia v2.4.1.1: Morphfolia v2.4.1.1 - New Release Includes: Better support for browsers other than IE (Chrome, Firefox, Safari - all tested on Windows) Supports ...NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD Version 1: Getting Started To get up and running, open two instances of Visual Studio 2010. In one window open the NCacheD client solution and then open the ...Papercut: Papercut 2010-3-3: This release includes a few bug fixes and updates, several of which were contributed patches (thanks!). Feature: Added support for embedded images...PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.4: Perwapi version 1.1.4 is the complete distribution package. It contains Binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...Prolog.NET: Prolog.NET 1.0 Beta 1.2: Installer includes: primary Prolog.NET assembly Prolog.NET Workbench PrologTest console application all required dependencies Beta 1.2 in...Protoforma | Tactica Adversa: Skilful 0.2.4.320: BetaRoTwee: RoTwee 6.1.0.0: 16604 "Post playing tune feature" is added. Using this new feature, you can tweet tune playing in iTunes. 16625 Error processing for network error...sELedit: sELedit v1.0: -SharePoint 2007 Deployment Wizard: Support for SharePoint Server and Foundation 2010: This release encompasses the supported install paths for the default install of SPS 2010 (the 14 hive). 
All three versions are now supported (60 h...SharePoint Theme Applicator: SharePoint Theme Applicator: SharePoint Theme Applicator was built using C# and WPF, it includes the following features: Provides the total number of site collections in the g...Shinkansen: compress, crunch, combine, and cache JavaScript and CSS: Shinkansen 1.0.0.030310: Build 1.0.0.030310, binaries onlysPATCH: sPatcher v0.8: sPatch - Server Example *Contains a sample Patch that "downgrades" PWI 1.4.2 Client to an 1.3.6 ClientTFS Code Comment Checking Policy (CCCP): CCCP 3.0 for VSTS 2008 SP1: This release includes NRefactory 3.2.0.5571 and is built against VSTS 2008 SP1 (.NET 3.5 is required). New: horizontal scrollbar in listboxes for ...TortoiseHg: Beta for TortoiseHg 1.0 (0.9.31254): Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before installing Use the x86 msi file for 32 bit platforms and th...TwitEclipseAPI: TwitEclipseAPI 0.9: 0.9 Release of TwitEclipseAPI Moved API calls to the api.Twitter.com URL. Moved API calls to the versioning API. Now uses the increased Rate Limit...TwitterVB - A .NET Twitter Library: TwitterVB-2.3: Patch 5151: Added BlockedUsers function to get a page other then the first Patch 5420: The ListMembers function will now return more then just th...休息提醒: 初始版本: 初始版本Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Microsoft SQL Server Community & SamplesASP.NETLiveUpload to FacebookMost Active ProjectsRawrBlogEngine.NETMapWindow GISpatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesMDT Web FrontEndsvn2tfsDiffPlex - a .NET Diff GeneratorIonics Isapi Rewrite FilterFarseer Physics Engine

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams. So what about problems accessing the offline rights database? So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database. How do you solve the problem? The resolution is, however, simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer and you will need to re-request your rights, even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware, it is good practice to increase the server limit from the default of 1 to say 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use; Download DeleteOfflineDBs.bat Note that this uses pskill to stop irmBackground.exe from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list. Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers Column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table; they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now whether Crystal can actually do it or not is another question, but can Publisher? Yes we can! Kan took the first stab. His approach was to allow swapping out columns in a table in the report. Some quick steps: 1. Create a parameter from the BIP server UI 2. Declare the parameter in the RTF template You can check this post to see how you can declare the parameter from the server. http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html 3. Use the parameter value to determine whether a particular column needs to be displayed or not. You can use the <?if@column:.....?> syntax for a column-level IF condition. The if@column construct is covered in the user documentation. This would allow a developer to create a report with the parameter, or multiple parameters, to allow the user to pick a column to be included in the report. I took a slightly different tack; with the mention of the column selector in the Answers report I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and needs some smart XSL and XPATH expressions, but it is still very doable. The user can select a column as a parameter that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPATH code to get the regrouping to happen. At this juncture, I need to hat tip to Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above, i.e. Create a parameter from the BIP server UI - this just needs to be based on a 'list' type list of values with name/value pairs e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template. Declare the parameter in the RTF template - been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes.) The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard-coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure we want to group by that matches the one passed by the parameter. It's essentially looking through the data tree for a match. 
We can show the actual grouping value in the report output with a similar XPATH expression <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter I want to show: Manager: Tim Dexter My XML elements are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.
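For reference, pulling the individual field entries above together, the skeleton of the re-grouping section of the RTF template looks roughly like this; the <?end for-each-group?> closing field is assumed (it was not shown above), and DEPARTMENT_NAME is only the default value used for previewing in the template builder:

    <?param@begin:group1;'"DEPARTMENT_NAME"'?>
    <?for-each-group:ROW;./*[name(.) = $group1]?>
    <?xdoxslt:init_cap(translate($group1,'_',' '))?>: <?./*[name(.) = $group1]?>
    ... table cells, totals and subtotals for each group go here ...
    <?end for-each-group?>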

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise that Jobinesh gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter how much of an Oracle ADF expert you are, this book will make you happy as it provides you with the detailed information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, it accelerates your learning big, big, big time. Chapter 1 is an introduction to Oracle ADF that not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about view objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Besides what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what metadata files are involved. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. 
managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later, this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean that it is not worth reading (I, for example, enjoyed the controller talk very much). Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a worthwhile addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • ICC Cricket World Cup 2011- Free Online Live Streaming, Mobile Apps, TV and Radio Guide

    - by Kavitha
    The ICC Cricket World Cup 2011 will be hosted jointly by Bangladesh, India and Sri Lanka. This 10th edition of World Cup is held between 19 February-2 April 2011. The World Cup drive will be starting in Dhaka on 19 February with the inaugural match between India and Bangladesh. The 43 days long ICC World Cup Cricket 2011 event will host 49 matches, day matches starting as early as 9.30am IST and day-night matches starting at 2.30pm IST. Here is our guide to follow 2011 ICC Cricket World Cup live on your computers, televisions,mobiles and radios Free Live Streaming On The Web (Official & Unofficial) http://espnstar.com will live stream all the matches of World Cup 2011 and they will be available in HD quality as they are the official broadcasters of World Cup 2011 cricket event. This is the first time ever a world cup cricket event is streamed online officially. If you are not able to access the official live streaming of Cricket World Cup due to regional restrictions, point your browser to any of the following unofficial live streams on the web. NOTE: MAKE SURE THAT YOUR ANTIVIRUS and ANTIMALWARE software are up and running before opening any of these sites. crictime.com - this site offers 6 live streaming servers that offer World Cup 2011 Cricket matches streams. Don’t mind the ads that are displayed left,right and center and just enjoy the cricket. Web pages dedicated for the world cup streaming are already live and you can bookmark them for your reference. cricfire.com/live-cricket: cricfire   gathers cricket live streams available around the web and provides them for easy access. Also they provide links for watching highlights and other post match analysis shows. Other sites that provide live streaming videos extracover.net webcric.com Searching for Unofficial Streams On Live Video Streaming Sites One of the best ways to find the unofficial streams is look for live streaming feeds on popular video streaming websites. We can be assured that these sites does not spread malware and spammy ads as they are well established. Here are the queries that you can use to search the popular sites FreedoCast  http://freedocast.com/search.aspx?go=cricket%20world%20cup Justin.tv      http://www.justin.tv/search?q=cricket+world+cup Ustream.tv  http://www.ustream.tv/discovery/live/all?q=cricket%20world%20cup TV Channels That Telecast Cricket World Cup Live Even though web is the place where we spend most of our time for entertainment, TVs are still popular for watching sports events. Mostly 90% of us are going to follow this cricket world cup matches on television sets. 
Here is the list of TV channels that paid whooping amounts of money for broadcasting rights and going to telecast live cricket Afghanistan – Ariana Television Network: Lemar TV Australia – Nine Network, Fox Sports Bangladesh – Bangladesh Television Canada – Asian Television Network China – ESPN Star Sports Europe (Except UK & Ireland) – Eurosport2 Fiji – Fiji TV India – ESPN Star Sports, Star Cricket, DD National (mostly India matches alone) Ireland – Zee Cafe Jamaica – Television Jamaica Middle East – Arab Radio and Television Network Nepal – ESPN Star Sports New Zealand – Sky Sport Pacific Islands – Sky Pacific Pakistan – GEO Super, Pakistan Television Corporation Pan-Africa – South African Broadcasting Corporation Singapore – Star Cricket South Africa – Supersport, Sabc3 Sport Sri Lanka – Sri Lanka Rupavahini Corporation United Kingdom – Sky Sports HD USA – Willow Cricket, DirecTV, Dish Network West Indies – Caribbean Media Corporation Radio Stations That Provide Live Commentary Don’t we listen to radio? Yes we still listen to radios, especially when we are on the go. Radios are part of our mobiles as well as music players like iPods. Here are the stations that you can tune into for catching live cricket commentary Australia – ABC Local Radio Bangladesh – Bangladesh Betar Canada , Central America – EchoStar India – All India Radio Pakistan, United Arab Emirates – Hum FM Sri Lanka – FM Derana United Kingdom, Ireland – BBC Radio West Indies – Caribbean Media Corporation Watch World Cup Cricket On Your Mobile This section is for Indian users. 3G rollout is happening at very high pace in all part of the India and most of the metros and towns are able to access 3G services. With 3G on your mobile you will be able to watch live ICC world cricket on your Reliance Mobiles and you can read more about it here. Top 10 Cricket Websites Check out our earlier post on top 10 cricket web sites for information. This article titled,ICC Cricket World Cup 2011- Free Online Live Streaming, Mobile Apps, TV and Radio Guide, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage of the unauthorized release of 251,287 diplomatic documents and of previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but that is not the case: they are an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of its stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. As of this blog post, we don't yet know whether that is true or how they got the information, but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information... he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc., etc.... And then think about that last quote above, from what was a very junior-level person in the organization... still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches, where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.  
Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see StuxNet for why that is important...)In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives, or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed for example.Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative polices if, for example, an employee changes roles or leaves the company and should no longer have access to the information.Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises.For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data. You may have noticed that the files that I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to another then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv': logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter in/out records from the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.
Input and Output file formats. LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is usable for whatever process or application you may be using. From any of these input formats: IISW3C: parses IIS log files in the W3C Extended Log File Format. IIS: parses IIS log files in the Microsoft IIS Log File Format. BIN: parses IIS log files in the Centralized Binary Log File Format. IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format. HTTPERR: parses HTTP error log files generated by Http.sys. URLSCAN: parses log files generated by the URLScan IIS filter. CSV: parses comma-separated values text files. TSV: parses tab-separated and space-separated values text files. XML: parses XML text files. W3C: parses text files in the W3C Extended Log File Format. NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats. TEXTLINE: returns lines from generic text files. TEXTWORD: returns words from generic text files. EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files). FS: returns information on files and directories. REG: returns information on registry values. ADS: returns information on Active Directory objects. NETMON: parses network capture files created by NetMon. ETW: parses Enterprise Tracing for Windows trace log files and live sessions. COM: provides an interface to Custom Input Format COM Plugins. To any of these output formats: NAT: formats output records as readable tabulated columns. CSV: formats output records as comma-separated values text. TSV: formats output records as tab-separated or space-separated values text. XML: formats output records as XML documents. W3C: formats output records in the W3C Extended Log File Format. TPL: formats output records following user-defined templates. IIS: formats output records in the Microsoft IIS Log File Format. SQL: uploads output records to a table in a SQL database. SYSLOG: sends output records to a Syslog server. DATAGRID: displays output records in a graphical user interface. CHART: creates image files containing charts. So, you can query data from any of the input formats and really easily get it into a format that is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
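Until then, as a rough preview of that SQL output format, a command along the following lines should load the filtered rows straight into a database table; the server, database and table names here are placeholders, and -createTable:ON asks LogParser to create the target table if it does not already exist:

    logparser "SELECT SalesOrderID, OrderQty, LineTotal INTO SalesOrderDetailOver25 FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:SQL -server:localhost -database:StagingDB -createTable:ON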

    Read the article

  • Review of Samsung Focus Windows Phone 7

    - by mbcrump
    I recently acquired a Samsung Focus Windows Phone 7 device from AT&T and wanted to share what I thought of it as an end-user. Before I get started, here are several of my write-ups for the Windows Phone 7. You may want to check out the second article titled: Hands-on WP7 Review of Prototype Hardware. From start to finish with the final version of Visual Studio Tools for Windows Phone 7 Hands-on : Windows Phone 7 Review on Prototype Hardware. Deploying your Windows Phone 7 Application to the actual hardware. Profile your Windows Phone 7 Application for Free Submitting a Windows Phone 7 Application to the Market. Samsung Focus i917 Phone Size: Perfect! I have been carrying around a Dell Streak (Android) and it is about half the size. It is really nice to have a phone that fits in your pocket without a lot of extra bulk. I bought a case for the Focus and it is still a perfect size.  The phone just feels right. Screen: It has a beautiful Super AMOLED 480x800 screen. I only wish it supported a higher resolution. The colors are beautiful especially in an Xbox Live Game.   3G: I use AT&T and I've had spotty reception. This really can't be blamed on the phone as much as the actual carrier. Battery: I've had excellent battery life compared to my iPhone and Android devices. I usually use my phone throughout the day on and off and still have a charge at the end of the day.  Camera/Video: I'm still looking for the option to send the video to YouTube or the Image to Twitter. The images look good, but the phone needs a forward facing camera. I like the iPhone/Android (Dell Streak) camera better. Built-in Speaker: Sounds great. It’s not a wimpy speaker that you cannot hear.  CPU: Very smooth transitioning from one screen to another. The prototype Windows Phone 7 that I had, was no where near as smooth. (It was also running a slower processor though). OS: I actually like the OS but a few things could be better. CONS: Copy and Paste (Supposed to come in the next update) We need more apps (Pandora missing was a big one for me and Slacker’s advertisement sucks!). As time passes, and more developers get on board then this will be fixed. The browser needs some major work. I have tried to make cross-platform (WP7, Android, iPhone and iPad) web apps and the browser that ships with WP7 just can’t handle it.  Apps need to be organized better. Instead of throw them all on one screen, it would help to allow the user to create categories. PROS: Hands down the best gaming experience on a phone. I have all three major phones (iphone, android and wp7). Nothing compares to the gaming experience on the WP7. The phone just works. I’ve had a LOT of glitches with my Android device. I’ve had maybe 2 with my WP7 device. Exchange and Office support are great. Nice integration with Twitter/Facebook and social media. Easy to navigate and find the information you need on one screen. Let’s look at a few pictures and we will wrap up with my final thoughts on the phone. WP7 Home Screen. Back of the phone is as stylish. It is hard to see due to the shadow but it is a very thin phone. What’s included? Manuals Ear buds Data Cable plus Power Adapter Phone Click a picture to enlarge So, what are my final thoughts on the Phone/OS? I love the Samsung Focus and would recommend it to anyone looking for a WP7 device. Like any first generation product, you need to give it a little while to mature. Right now the phone is missing several features that we are all used to using. 
That doesn’t mean a year from now it will be in the same situation. (I sure hope we won’t). If you are looking to get into mobile development, I believe WP7 is the easiest platform to develop from. This is especially true if you have a background in Silverlight or WPF.    Subscribe to my feed

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what HDFS is. In this article we will take a quick look at the importance of the Relational Database in the Big Data world. A Big Question? Here are a few questions I have often received since the beginning of the Big Data Series - Does the relational database have no place in the Big Data story? Is the relational database no longer relevant as Big Data evolves? Is the relational database not capable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Well, every single time I hear that a person wants to learn about Big Data and is no longer interested in learning about the relational database, I find that a bit of a stretch. I am not here to give the ambiguous answer of It Depends. I am personally very clear that anyone aspiring to become a Big Data Scientist or Big Data Expert should learn about the relational database. NoSQL Movement The reason for the NoSQL Movement in recent times was two important advantages of NoSQL databases: Performance Flexible Schema In my personal experience I have found both of the above listed advantages when I use a NoSQL database. There are instances where I have found the relational database too restrictive, when my data is unstructured or uses datatypes which my relational database does not support. There are likewise cases where I have found a NoSQL solution performing much better than relational databases. I must say that I am a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where the relational database is still a perfect fit even though the database is growing steadily and has all the symptoms of Big Data. Situations Where the Relational Database Outperforms Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate on columns which are not indexed, with the aggregations built while the report is running; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. In the case of ad-hoc reporting I have often found it much easier to work with relational databases. SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time in the future. There are plenty of tools, connectors and awareness of the SQL language in the industry. Pretty much every programming language has written drivers for the SQL language and most developers have learned this language during their school/college time. In many cases, writing a query in SQL is much easier than writing queries in NoSQL-supported languages. I believe this is the current situation, but in the future this situation can reverse when NoSQL query languages are equally popular. ACID (Atomicity Consistency Isolation Durability) – Not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example banking transactions, eCommerce shopping carts etc.) where, if there is no ACID, the operations can be invalid and database integrity can be at risk. Even though the data volume may indeed qualify as Big Data, there are always operations in the application which absolutely need an ACID-compliant, mature language.
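To make the ACID point concrete, the classic banking example looks something like this in plain T-SQL: either both updates persist or neither does, which is exactly the guarantee many NoSQL stores do not give you out of the box (the table and column names are made up purely for illustration):

    BEGIN TRY
        BEGIN TRANSACTION;
            -- Move 100 from one account to another; both statements succeed or neither does
            UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
            UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Any failure rolls the whole transfer back, keeping the data consistent
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
    END CATCH;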
The Mixed Bag I have often heard the argument that all the big social media sites nowadays have moved away from the relational database. Actually, this is not entirely true. While researching Big Data and relational databases, I have found that many of the popular social media sites use Big Data solutions along with a relational database. Many are using relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline. (Reference Link) Twitter uses MySQL. (Reference Link) Tumblr uses sharded MySQL. (Reference Link) Wikipedia uses MySQL for data storage. (Reference Link) There are many prominent organizations running large-scale applications that use a relational database along with various Big Data frameworks to satisfy their various business needs. Summary I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream – there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone’s taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. In the same way, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases where relational database concepts and their ideology are needed. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology. Ray of Hope – NewSQL In this module we discussed that there are places where we need ACID compliance from our Big Data application and NoSQL will not support that out of the box. There is a new term coined for the application/tool which supports most of the properties of the traditional RDBMS and supports Big Data infrastructure – NewSQL. Tomorrow In tomorrow’s blog post we will discuss NewSQL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Clean Up the New Ubuntu Grub2 Boot Menu

    - by Trevor Bekolay
    Ubuntu adopted the new version of the Grub boot manager in version 9.10, getting rid of the old problematic menu.lst. Today we look at how to change the boot menu options in Grub2. Grub2 is a step forward in a lot of ways, and most of the annoying menu.lst issues from the past are gone. Still, if you’re not vigilant about removing old versions of the kernel, the boot list can end up being longer than it needs to be. Note: You may have to hold the SHIFT button on your keyboard while booting up to get this menu to show. If only one operating system is installed on your computer, it may load automatically without displaying this menu. Remove Old Kernel Entries The most common clean-up task for the boot menu is to remove old kernel versions lying around on your machine. In our case we want to remove the 2.6.32-21-generic boot menu entries. In the past, this meant opening up /boot/grub/menu.lst… but with Grub2, if we remove the kernel package from our computer, Grub automatically removes those options. To remove old kernel versions, open up Synaptic Package Manager, found in the System > Administration menu. When it opens up, type the kernel version that you want to remove in the Quick search text field. The first few numbers should suffice. For each of the entries associated with the old kernel (e.g. linux-headers-2.6.32-21 and linux-image-2.6.32-21-generic), right-click and choose Mark for Complete Removal. Click the Apply button in the toolbar and then Apply in the summary window that pops up. Close Synaptic Package Manager. The next time you boot up your computer, the Grub menu will not contain the entries associated with the removed kernel version. Remove Any Option by Editing /etc/grub.d If you need more fine-grained control, or want to remove entries that are not kernel versions, you must change the files located in /etc/grub.d. /etc/grub.d contains files that hold the menu entries that used to be contained in /boot/grub/menu.lst. If you want to add new boot menu entries, you would create a new file in this folder, making sure to mark it as executable. If you want to remove boot menu entries, as we do, you would edit files in this folder. If we wanted to remove all of the memtest86+ entries, we could just make the 20_memtest86+ file non-executable with the terminal command sudo chmod -x 20_memtest86+ followed by the terminal command sudo update-grub Note that memtest86+ was then not found by update-grub, because it only considers executable files. However, instead we’re going to remove the Serial console 115200 entry for memtest86+… Open a terminal window (Applications > Accessories > Terminal). In the terminal window, type in the command: sudo gedit /etc/grub.d/20_memtest86+ The menu entries are found at the bottom of this file. Comment out the menu entry for serial console 115200 by adding a “#” to the start of each line. Save and close this file. In the terminal window you opened, enter the command sudo update-grub Note: If you don’t run update-grub, the boot menu options will not change! Now, the next time you boot up, that strange entry will be gone, and you’re left with a simple and clean boot menu. Conclusion While changing Grub2’s boot menu may seem overly complicated to legacy Grub masters, for normal users, Grub2 means that you won’t have to change the boot menu that often. Fortunately, if you do have to do it, the process is still pretty easy. For more detailed information about how to change entries in Grub2, this Ubuntu forum thread is a great resource. 
If you’re using an older version of Ubuntu, check out our article on how to clean up Ubuntu grub boot menu after upgrades.

    Read the article

  • How to Add a Business Card, or vCard (.vcf) File, to a Signature in Outlook 2013 Without Displaying an Image

    - by Lori Kaufman
    Whenever you add a Business Card to your signature in Outlook 2013, the Signature Editor automatically generates a picture of it and includes that in the signature as well as attaching the .vcf file. However, there is a way to leave out the image. To remove the business card image from your signature but maintain the attached .vcf file, you must make a change to the registry. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong. Before changing the registry, we must add the Business Card to the signature and save it so a .vcf file of the contact is created in the Signatures folder. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. For this example, we will create a new signature to include the .vcf file for your business card without the image. Click New below the Select signature to edit box. Enter a name for the new signature, such as Business Card, and click OK. Enter text in the signature editor and format it the way you want or insert a different image or logo. Click Business Card above the signature editor. Select the contact you want to include in the signature on the Insert Business Card dialog box and click OK. Click Save below the Select signature to edit box. This creates a .vcf file for the business card in the Signatures folder. Click on the business card image in the signature and delete it. You should only see your formatted text or other image or logo in the signature editor. Click OK to save your new signature and close the signature editor. Close Outlook as well. Now, we will open the Registry Editor to add a key and value to indicate where to find the .vcf to include in the signature we just created. If you’re running Windows 8, press the Windows Key + X to open the command menu and select Run. You can also press the Windows Key + R to directly access the Run dialog box. NOTE: In Windows 7, select Run from the Start menu. In the Open edit box on the Run dialog box, enter “regedit” (without the quotes) and click OK. If the User Account Control dialog box displays, click Yes to continue. NOTE: You may not see this dialog box, depending on your User Account Control settings. Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures Make sure the Signatures key is selected. Select New | String Value from the Edit menu. NOTE: You can also right-click in the empty space in the right pane and select New | String Value from the popup menu. Rename the new value to the name of the Signature you created. For this example, we named the value Business Card. Double-click on the new value. In the Value data edit box on the Edit String dialog box, enter the value indicating the location of the .vcf file to include in the signature. The format is: <signature name>_files\<name of .vcf file> For our example, the Value data should be as follows: Business Card_files\Lori Kaufman The name of the .vcf file is generally the contact name. If you’re not sure of what to enter for the Value data for the new key value, you can check the location and name of the .vcf file. To do this, open the Outlook Options dialog box and access the Mail screen as instructed earlier in this article. 
This time, however, press and hold the Ctrl key while clicking the Signatures button. The Signatures folder opens in Windows Explorer. There should be a folder in the Signatures folder named after the signature you created, with “_files” added to the end. For our example, the folder is named Business Card_files. Open this folder. In this folder, you should see a .vcf file with the name of your contact as the name of the file. For our contact, the file is named Lori Kaufman.vcf. The path to the .vcf file should be the name of the folder for the signature (Business Card_files), followed by a “\”, and the name of the .vcf file without the extension (Lori Kaufman). Putting these names together, you get the path that should be entered as the Value data for the new string value you created in the Registry Editor. Business Card_files\Lori Kaufman Once you’ve entered the Value data for the new value, select Exit from the File menu to close the Registry Editor. Open Outlook and click New Email on the Home tab. Click Signature in the Include section of the New Mail Message tab and select your new signature from the drop-down menu. NOTE: If you made the new signature the default signature, it will be automatically inserted into the new mail message. The .vcf file is attached to the email message, but the business card image is not included. All you will see in the body of the email message is the text or other image you included in the signature. You can also choose to include an image of your business card in a signature with no .vcf file attached.
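    If you prefer to apply the registry change from a file rather than by hand, the same value can be set with a .reg import. The sketch below is not part of the original article; it assumes the example names used above (a signature named Business Card and a contact file named Lori Kaufman.vcf), so substitute your own signature and contact names. Note that backslashes inside .reg string data must be doubled.

        Windows Registry Editor Version 5.00

        ; Points the "Business Card" signature at its .vcf file so Outlook 2013
        ; attaches the card without inserting the generated business card image.
        ; The path is relative to the Signatures folder.
        [HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures]
        "Business Card"="Business Card_files\\Lori Kaufman"

    Save the lines above as a .reg file and double-click it (or run reg import on it) while Outlook is closed, then reopen Outlook to test the signature.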

    Read the article

  • “My life at Oracle”

    - by cristian.condurache(at)oracle.com
    Hello everybody! My name is Eva and I currently work in Oracle Italy as Sales Programs Manager for the Technology Sales organization. Since 2009, I also proudly represent the Oracle Education Foundation within my country as the Ambassador for Italy. My career path in this amazing company began 5 years ago as a fresh graduate: after several years studying abroad, mainly in Germany and Ireland, I was looking for a valuable and concrete opportunity which could fulfill my energetic spirit. I wanted to develop myself inside a stimulating and “fast” business environment… and here came Oracle, and I really couldn’t ask for anything better! THE PARTNER EXPERIENCE The first department I had the chance to work in was the Alliances and Channels organization, where I had the opportunity to join a brilliant team of great and visionary guys. I began with the responsibility to analyze and rationalize the portfolio of Oracle business partners and to identify potential cross-area solutions, which had to be highlighted both on the local market and internationally: this culminated in the implementation of the “Partner Community” model, a business environment of selected Oracle partners, specialized in the different technology focus areas. This new concept was then recognized as an EMEA Best Practice and replicated internationally. Having the opportunity to strengthen strategic relationships with several business partners day after day and to study the market positioning of their technology solutions, I was given the role of developing the “Oracle Partner Network Innovation Award” in Italy: the EMEA competition encouraging and rewarding proven and successful technology innovations, creating high value for our common customers and generating new business potential. Several Italian partner solutions won different prizes, and I decided that it was worth collecting all those valuable projects, winners and short-listed, inside two specific books in order to give them international market visibility: OPN Innovation Award Booklet 2007 and OPN Innovation Award Booklet 2008. Inside the Alliances and Channels department I really had the opportunity to do amazing things, like for example working side-by-side with one of the most exceptional teams in Oracle I have ever worked with: the EMEA Recruitment Team. Together, in fact, we conceived a brand new business initiative for our partners, called “Oracle Campus Joint Program”. This program was awarded as an EMEA Best Practice and acknowledged by both Italian public institutions and press media. Italy is currently running its 5th edition. Briefly, the “Oracle Campus Joint Program” aims at addressing the growing lack of technology competences and skills in the market. By identifying a specific technology area, developing an intensive 4-6 week Oracle University training course, and collaborating with important academic institutes, international “gurus” and professionals, our business partners are able to benefit from a pool of brilliant, top-talented young consultants and offer them a significant career opportunity. BUSINESS BUT NOT ONLY: THE NON-PROFIT EXPERIENCE OF ORACLE Currently my mission in Oracle is to continue driving the implementation of strategic business development and sales programs for the entire Oracle Technology stack, involving both partners and end customers. 
But in a role completely distinct from the day-to-day business, I’m also honored to represent in Italy the global charitable organization founded by Oracle - the Oracle Education Foundation - and drive its corporate citizenship and marketing programs. The Oracle Education Foundation is an independent charitable organization funded by Oracle and is dedicated to helping students develop 21st century skills through project learning and the use of technology. It provides “ThinkQuest” as a free program to primary and secondary (K12) schools. Just some significant numbers: today 548,000 students and teachers in 47 countries use ThinkQuest, and the Oracle Education Foundation partners with 40+ non-profit or government organizations globally. ABOUT MYSELF AND MY INTERESTS About myself… I’m very enthusiastic and positive, always trying to transform difficult issues into challenging opportunities. My day usually begins very early in the morning with running or swimming, or, when I need to collect some “zen” energy, with a yoga session or better still a long walk with my dog. I definitely love animals and generally speaking I’m very keen on environmental issues and try, as much as I can, to lead a healthy and “planet respectful” lifestyle. My thirst for knowledge pushed me some time ago to begin a new personal challenge: I decided to enroll for a second university degree, dedicating a good part of my free time to it. I chose “Neuroeconomics”, an innovative academic path which combines psychology, economics, and neuroscience and studies how people make decisions and the role of the brain when people evaluate these decisions, categorizing risks and rewards and generally interacting with each other. I’ve been very glad to talk about my experience in this article, as working for Oracle is something very stimulating. This company gives you the opportunity to face new challenges, work with highly talented people and gain professional recognition, also globally. Motivation, good results and innovation are always pursued, recognized and fully supported. Thanks, and I wish you all an amazing career! If you have any questions please contact [email protected]. For our job opportunities, please look at http://campus.oracle.com. Technorati Tags: EMEA,Oracle Partners,Oracle Campus,Oracle Education,experience,EMEA Recruitment Team

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step, if you suspect high CPU utilization (or are alerted to it), is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this information. Note that to see all processes you should select Show processes from all users. In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for max CPU utilization is about 50-60%). Next we examine the scheduler data. The scheduler is a component of SQLOS which distributes load evenly amongst the CPUs. The query below returns the important columns for CPU troubleshooting. Note – if your server is under severe stress and you are unable to log in to SSMS, you can use another machine's SSMS to connect to the server through the DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC).

    SELECT scheduler_id
        ,cpu_id
        ,status
        ,runnable_tasks_count
        ,active_workers_count
        ,current_tasks_count
        ,load_factor
        ,yield_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 1048576;

    See below for the BOL definitions of the above columns.
    scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id – ID of the CPU with which this scheduler is associated.
    status – Indicates the status of the scheduler.
    runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count – Number of current tasks that are associated with this scheduler.
    load_factor – Internal value that indicates the perceived load on this scheduler.
    yield_count – Internal value that is used to indicate progress on this scheduler.
    Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task is complete, the number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; the new task will be allocated to the least loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks, which means all of these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue. The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note, in its current form, it only shows the top 10 records).
    SELECT TOP 10 st.text
        ,st.dbid
        ,st.objectid
        ,qs.total_worker_time
        ,qs.last_worker_time
        ,qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC;

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns: total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled. last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed. I re-ran the same query after a few seconds and got the output below. After a few seconds the stored procedure dbo.TestProc1 is shown in fourth place, and once again its last_worker_time is the highest. This means the procedure TestProc1 consumes CPU time continuously each time it executes. In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.
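    One caveat when reading that output: total_worker_time accumulates over every execution since the plan was cached, so a cheap query that runs very frequently can rank above a single genuinely expensive one. A small variation of the query above (a sketch, not part of the original article) also returns execution_count and the average CPU time per execution, which helps separate the two cases:

        SELECT TOP 10
             st.text
            ,qs.execution_count
            ,qs.total_worker_time                                          -- total CPU time (microseconds) since the plan was cached
            ,qs.total_worker_time / qs.execution_count AS avg_worker_time  -- average CPU time per execution
            ,qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY avg_worker_time DESC;

    Queries that appear near the top of both orderings are usually the best tuning candidates.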

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California. Monday, October 1 There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions”, featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently. 
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning, everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need of ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress toward the master level of performance tuning, I find it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation the next time. A few days ago I received a query that the user wanted to tune further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database.

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above; it was used in a very critical report and they wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns the application actually needs in the SELECT statement, and look at the query pattern and data workload to find the optimal index for it. Before I could give further suggestions, I was told by the user that they need all the columns from all the tables and that creating an index was not allowed on their system. They could only re-write the query or use hints to tune it further. Now I was in a constraint box – I believe SELECT * is not a great idea, but if they want all the columns, we can’t do much besides using *. Additionally, since I could not create a further index, I had to come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero’s DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here. First, open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once you open the DB Optimizer Tuning Job, follow the various steps indicated in the following diagram. Essentially, we will take our original script and paste it into Step 1: New SQL Text; right after that we will enable Step 2 for generating various cases, Step 3 for detailed analysis, and Step 4 for executing each generated case. Finally, we will click on Analysis in Step 5, which will generate the detailed analysis report in the results pane. It generates various cases of T-SQL based on the original query: it applies the various available hints to the query, generates the different execution plans, and displays them in the result. You can clearly notice that the original query had a cost of 0.0841 and about 607 logical reads (pages), whereas the various options that follow it have different execution costs as well as logical reads. There are a few cases where we have higher logical reads and a few cases where we have very low logical reads. 
If we pay attention, the very next row after the original query has Merge_Join_Query in its description, with the lowest execution cost value of 0.044 and the lowest logical reads of 29. This row contains the query which is the most optimal re-write of the original query. Let us double-click on it. Here is the query:

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID
    OPTION (MERGE JOIN);

    If you notice, the above query has an additional Merge Join hint. With the help of this Merge Join query hint, the query is now performing much better than before. The entire process takes less than 60 seconds. Please note that the Merge Join hint was optimal for this query, but it is not necessarily the case that the same hint will be helpful in all queries. Additionally, if the workload or data pattern changes, the merge join hint may no longer be the optimal join. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries, and I discourage all of my users from using them. However, if you look at this example, this is a great case where a hint optimizes the performance of the query. It is humanly not possible to test out the various query hints and index options with a query to figure out which is the most optimal solution. Sometimes, we need to depend on efficiency tools like DB Optimizer to guide us along the way and select the best option from the suggestions provided. Let me know what you think of this article as well as your experience with DB Optimizer. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
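    If you want to reproduce the comparison above without DB Optimizer, you can measure the two variants yourself with SET STATISTICS IO and SET STATISTICS TIME. The sketch below is not from the original article; it assumes the AdventureWorks sample database used above, and the logical reads and CPU times it reports will differ from the article’s figures depending on your SQL Server version and data.

        -- Report logical reads and CPU/elapsed time for each statement that follows.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Original query: the optimizer chooses the join strategy.
        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh
            ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s
            ON edh.ShiftID = s.ShiftID;

        -- Re-written query: force merge joins, then compare the Messages tab output.
        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh
            ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s
            ON edh.ShiftID = s.ShiftID
        OPTION (MERGE JOIN);

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    Comparing the logical reads and CPU time reported for the two statements shows whether the hint actually helps for your workload.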

    Read the article
