Search Results

Search found 3054 results on 123 pages for 'sticky notes'.


  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where the customer was modifying the Content Server's contributor data files directly through Content Server.  This operation is of course completely supported.  However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately.  This was for two reasons.  The first is that the Spaces page was using Content Presenter to display the contents of the data file.  The second is that the Spaces application was using the "cached" version of the data file.  Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter.  The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application.  With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or a query for content, and then select a Content Presenter-based template to render the content on a page in a Spaces application.  In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or in a custom Content Presenter display template.  More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection settings through Enterprise Manager.  From here, under Cache Details, there is a section to set the Cache Invalidation Interval.  This enables the cache to be monitored by the cache "sweeper" utility.  The sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty".  This causes the application in turn to get a new copy of the document from the Content Server, which replaces the cached version.  By default, the Cache Invalidation Interval is set to 0 (minutes), which means that the sweeper is OFF.  To turn the sweeper ON, just set a value (in minutes); the minimal value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0.  The good news is that this value can also be updated through a WLST command.  The WLST command to run is as follows:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort,
            keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword,
            webContextRoot, clientSecurityPolicy, cacheInvalidationInterval,
            binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout,
            isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is the sample output from the execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Application ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket',
            serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs',
            cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',
            adminUsername='sysadmin', isPrimary=1)

    Note: the Spaces managed server must be restarted for the change to take effect.  More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated.  To test this out, I will go through a simple scenario.  The first thing to do is configure the Content Server so it can monitor and report on events.  Log into the Content Server console application, and under the Administration menu item, select System Audit Information.  Note: if your console is using the left menu display option, the Administration link will be located there.  Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections.  Check Full Verbose Tracing, check Save, then click the Update button.  Once this is done, select the View Server Output menu option.  This will change the browser view to display the log.  This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes).  Note the time stamps:

        requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper.  I will use 2 different pages, each containing a Content Presenter task flow.  Each task flow will use a different custom Content Presenter display template, and each will be assigned a different contributor data file (the documents that will be in the cache).  Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the 2 data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were called by the (Spaces) VCR connection to the Content Server.  The most important lines to notice are the VCR_GET_DOCUMENT_BY_NAME invocations.  On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations.  This is because the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console.  This can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option.  This invokes the Site Studio Contributor, where the modifications can be made.

    Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments, refreshing the P1 page shows that the updates have been applied.  Note: the refresh time may vary, since the sweeper run is not tied to when changes happen; with the Cache Invalidation Interval set to 2 minutes, the sweeper simply runs every 2 minutes.

    Refreshing the Content Server View Server Output, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval the sweeper is invoked, which is noted by the GET_..._HISTORY_REPORT calls.  Since the history has noted the change, the next call is VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modified) data file.  Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file.  This simply means that the data file was just retrieved from the cache.
    Upon further review of the server output, we can see that there was only 1 request for VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****
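
    To recap the configuration side of this scenario, here is a minimal end-to-end WLST session for turning the sweeper on.  The connect URL and credentials are placeholders for your own environment; the parameter values are the ones used in this post:

        # Placeholder admin URL and credentials - substitute your own.
        connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')

        # Review the current connection settings first.
        listJCRContentServerConnections('webcenter', verbose=true)

        # Turn the sweeper ON with a 2 minute invalidation interval.
        setJCRContentServerConnection(appName='webcenter', name='UCM',
            socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444',
            webContextRoot='/cs', cacheInvalidationInterval='2',
            binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

        # Remember: the Spaces managed server must be restarted afterwards.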

    Read the article

  • Integrate Nitro PDF Reader with Windows 7

    - by Matthew Guay
    Would you like a lightweight PDF reader that integrates nicely with Office and Windows 7?  Here we look at the new Nitro PDF Reader, a nice PDF viewer that also lets you create and mark up PDF files.

    Adobe Reader is the de facto PDF viewer, but it only lets you view PDFs and not much else.  Additionally, it doesn’t fully integrate with 64-bit editions of Vista and Windows 7.  There are many alternative PDF readers, but Nitro PDF Reader is a new entry into this field that offers more features than most.  From the creators of the popular free PrimoPDF printer, the new Reader lets you create PDFs from a variety of file formats and mark up existing PDFs with notes, highlights, stamps, and more, in addition to viewing PDFs.  It also integrates well with Windows 7 using the Office 2010 ribbon interface.

    Getting Started

    Download the free Nitro PDF Reader (link below) and install as normal.  Nitro PDF Reader has separate versions for 32 & 64-bit editions of Windows, so download the correct one for your computer.  Note: Nitro PDF Reader is still in beta testing, so only install it if you’re comfortable with using beta software.

    On first run, Nitro PDF Reader will ask if you want to make it the default PDF viewer.  If you don’t want to, make sure to uncheck the box beside Always perform this check to keep it from opening this prompt every time you use it.  It will also open an introductory PDF the first time you run it so you can quickly get acquainted with its features.

    Windows 7 Integration

    One of the first things you’ll notice is that Nitro PDF Reader integrates well with Windows 7.  The ribbon interface fits right in with native applications such as WordPad and Paint, as well as Office 2010.  If you set Nitro PDF Reader as your default PDF viewer, you’ll see thumbnails of your PDFs in Windows Explorer.  If you turn on the Preview Pane, you can read full PDFs in Windows Explorer.  Adobe Reader lets you do this in 32-bit versions of Windows, but Nitro PDF works in 64-bit versions too.

    The PDF preview even works in Outlook.  If you receive an email with a PDF attachment, you can select the PDF and view it directly in the Reading Pane.  Click the Preview file button, and you can uncheck the box at the bottom so PDFs will automatically open for preview if you want.  Now you can read your PDF attachments in Outlook without opening them separately.  This works in both Outlook 2007 and 2010.

    Edit your PDFs

    Adobe Reader only lets you view PDF files, and you can’t save data you enter in PDF forms.  Nitro PDF Reader, however, gives you several handy markup tools you can use to edit your PDFs.  When you’re done, you can save the final PDF, including information entered into forms.  With the ribbon interface, it’s easy to find the tools you want.  Here we’ve highlighted text in a PDF and added a note to it.  We can now save these changes, and they’ll look the same in any PDF reader, including Adobe Reader.

    You can also enter new text in PDFs.  This will open a new tab in the ribbon, where you can select basic font settings.  Select the Click To Finish button in the ribbon when you’re finished editing text.  Or, if you want to use the text or pictures from a PDF in another application, you can choose to extract them directly in Nitro PDF Reader.

    Create PDFs

    One of the best features of Nitro PDF Reader is the ability to create PDFs from almost any file.  Nitro adds a new virtual printer to your computer that creates PDF files from anything you can print.
    Print your file as normal, but select the Nitro PDF Creator (Reader) printer.  Enter a name for your PDF, select whether you want to edit the PDF properties, and click Create.  If you choose to edit the PDF properties, you can add your name and information to the file, select the initial view, encrypt it, and restrict permissions.

    Alternatively, you can create a PDF from almost any file by simply dragging and dropping it into Nitro PDF Reader.  It will automatically convert the file to PDF and open it in a new tab.  Now from the File menu you can send the PDF as an email attachment so anyone can view it.  Make sure to save the PDF before closing Nitro, as it does not automatically save the file.

    Conclusion

    Nitro PDF Reader is a nice alternative to Adobe Reader, and offers some features that are only available in the more expensive Adobe Acrobat.  With great Windows 7 integration, including full support for 64-bit editions, Nitro fits in with the Windows and Office experience very nicely.  If you have tried out Nitro PDF Reader, leave a comment and let us know what you think.

    Link: Download Nitro PDF Reader

    Read the article

  • Secure wipe of a hard drive using WinPE.

    - by Derek Meier
    The wiping of a hard drive is typically seen as fairly trivial.  There are tons of applications out there that will do it for you.  Point → Click → Global Thermonuclear War.  However, these applications are typically expensive or unreliable.  Plus, if you have a laptop, or lack a secondary computer to put the hard drive into, how on earth do you wipe it quickly and easily while still conforming to a 7-pass rule (this means that every possible bit on the hard drive is set to 0 and then to 1 seven times in a row)?  Yes, one pass should be enough, as turning every bit from a 1 to a 0 will wipe the data from existence.  But we’re dealing with tinfoil-hat-wearing types here, people.  DOD standards dictate at least 3 passes, and typically 7 is the preferred amount.  I’m not going to argue about data recovery.  I have been told to use 7 passes, and so I will.  So say we all!

    Quite some time ago I used to make a BartPE XP-based boot CD for the original purpose of securely wiping data.  I loved BartPE and integrated so many plugins into my builds that I could do pretty much anything directly from CD: reset passwords, uninstall security updates, wipe drives, chkdsk, remove spyware, install Windows, etc.  However, with the newer multi-core systems and new chipsets coming out from vendors, I found that BartPE was rather difficult to keep up to date.  I have since switched to WinPE 3.0 (Windows Preinstallation Environment): http://technet.microsoft.com/en-us/library/cc748933(WS.10).aspx.  It is fairly simple to create your own CD, and I have made a few helpful scripts to easily integrate drivers and rebuild the ISO file for you.  I’ll cover making your own boot CD utilizing WinPE 3.0 in a later post – I can talk about WinPE forever and need to collect my thoughts!!  My wife loves talking about WinPE almost as much as talking about Doctor Who.  Wait, did I say loves?  Hmmmm, I may have meant loathes.

    The topic at hand?  Right.  Wiping a drive!  I must have drunk too much coffee this morning.  I like to use a simple batch script that calls a combination of diskpart.exe from Microsoft® and sdelete.exe created by our friend Mark Russinovich: http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx.  All of the following files are located within the same directory on my WinPE boot CD.  Here are the contents of wipe_me.bat, script.txt and sdelete.reg.

    wipe_me.bat:

        @echo off
        echo.
        echo     I will completely wipe the local hard drives using
        echo     7 individual wipes. The data will NOT
        echo     be recoverable.  I will begin after you pause
        echo.
        echo Preparing to partition and format disk.
        Diskpart.exe /s "script.txt"
        REM I was annoyed by not having a completely automated script - and Sdelete
        REM wants you to accept the license agreement. So, I added a registry file
        REM to skip doing that.
        regedit /S sdelete.reg
        REM sdelete options selected are: -p (passes) -c (zero free space)
        REM -s (recurse through subdirectories, if any) -z (clean free space) [drive letter]
        sdelete.exe -p 7 -c -s -z c:
        echo.
        echo Pass seven complete.
        echo.
        echo Wiping complete.
        Pause
        exit

    script.txt:

        list disk
        select disk 0
        clean
        create partition primary
        select partition 1
        active
        format FS=NTFS LABEL="New Volume" QUICK
        assign letter=c
        exit

    *Note: this script assumes one local hard drive – change the script as you see fit for your environment.  The clean command will overwrite the master boot record and any hidden sector information – so be careful!

    sdelete.reg:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Sysinternals\SDelete]
        "EulaAccepted"=dword:00000001

    With a combination of WinPE, sdelete.exe and your friendly neighborhood text editor, you can begin wiping drives as quickly and easily as possible!  I hope this helps – I get asked this a lot in my line of work.  Best of luck, Derek

    Read the article

  • Developer Training – A Conclusive Summary – Part 5

    - by pinaldave
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    We have now reached the end of our series about developer training.  I hope you have come away thinking that training is the best way to advance in your company, and that you are looking for training opportunities right now.  If you’re still not convinced, here are a few things to keep in mind:  Training benefits the employer and the employee.  A well trained employee is a happy employee, and a happy employee is more efficient and productive.  Training an employee might be expensive, but it is less expensive than hiring a new person.  Whether you are looking at it from the employee’s or the company’s point of view, there are always advantages to training.

    A Broader View

    This series was definitely written for developer training, but it is not limited to developers only.  There are IT Pros, System Admins, DBAs and many other technology professionals; this article series is for all of them.  The concepts and takeaways are common across platforms, regardless of technology affiliation.

    Pass the Knowledge

    If I had to pick the one piece of advice that is most important related to training, it would be this: pass the knowledge on.  Once you have decided in favor of training, there is more to it than simply showing up and staying awake.  It is always a good idea to take notes – at the very least it will help you stay awake, but they will often serve as a good way to remember your training when you go back to work.  You can also use them to pass your new knowledge on to fellow employees, which can be very fun and rewarding.

    Right Place, Right Time and Right Training

    There are so many ways to get developer training.  In-person and on-the-job training is easy to come by and is the most usual type of training, but don’t overlook my favorite type: On Demand.  Being able to learn at your own pace, in your own place and on your own time will make training a realistic goal for almost every employee.  I can think of nothing more important in life than furthering your education, especially when you work in a field that is constantly changing – like technology.  Whether you like it or not, training is incredibly important.  That is why I feel it is so important to receive training.  And because there are so many different training formats – live, online, through books, through people – I am certain that we all can find a way to be trained that best suits our goals and personalities.

    The Teacher Within

    If you think of anyone who is a master of the technology field or an incredibly successful developer (the obvious examples that spring to mind are Steve Jobs or Bill Gates), you will also find a teacher.  Both these individuals spent their lives developing better technology, but also educating other developers and the public about how to use these technologies and how they can change your life for the better.  I think that we all should strive to be like these wonderful teachers.  We might not be able to change the world, but we can certainly change a few lives around us.  Even if we never turn into trainers ourselves, being trained as a student can be a good exercise.  We learn a lot and become better employees – and it would not be a stretch to say that this makes us better individuals, as well.

    Final Say

    I think learning and growing in your chosen field is not only a good idea, career-wise, but can be fun, too!  I for one never feel more alive than when I am learning about something I am really passionate about.  I think my job title – technology evangelist – explains how enthusiastic I am about this subject.  But please don’t think of me only as someone who wants to train and educate others (although this is also one of my passions).  I am also a passionate student.  I enjoy learning new things and am always on the lookout for new ways to learn and new people to learn from.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Kaiden and the Arachnoid Cyst

    - by Martin Hinshelwood
    Some of you may remember that when my son Kaiden was born I posted pictures of him and his sister.  Kaiden is now 15 months old and is progressing perfectly in every area except that he was not walking yet, which had us worried.  We were only really concerned because his sister was walking at 8 months.

    Figure: Kai as his usual self

    Jadie and I were concerned about that, and about his rather large head (noggin), so we talked to various GPs and our health visitor, who immediately dismissed our concerns every time.  That was until about two months ago, when we happened to get a GP whose daughter had Hyper Mobility and who recognised the symptoms immediately.  We were referred to the Southbank clinic, who were lovely, and the paediatrician confirmed that he had Hyper Mobility after testing all of his faculties.  This just means that his joints are overly mobile and he would need a little physiotherapy to help him out.  At the end, the paediatrician remarked offhand that he has a rather large head and wanted to measure it.  Sure enough, he was a good margin above the highest percentile mark for his height and weight.  The paediatrician showed the measurements to a paediatric consultant who, as a precautionary measure, referred us for an MRI at Yorkhill Children's hospital.

    Now, Yorkhill has always been fantastic to us, and this was no exception.  You know, we have NEVER had a correct diagnosis for the kids (with the exception of the above) from a GP, and indeed twice have been prescribed incorrect medication that made the kids sicker!  We now always go straight to Yorkhill to save them having to fix GP mistakes as well.

    Monday 24th May, 7pm

    The scan went fantastically, with Kaiden sleeping in the MRI machine for all but 5 minutes at the end, where he waited patiently for it to finish.  We were not expecting anything to be wrong, as this was just a precautionary scan to make sure that nothing in his head was affecting his gross motor skills.  After the scan we were told to expect a call towards the end of the week…

    Tuesday 25th May, 12pm

    The very next day we got a call from Southbank, who said that they had found an Arachnoid Cyst, and could we come in the next day to see a consultant, and that Kai would need an operation.

    Wednesday 26th May, 12:30pm

    We went into the Southbank clinic and spoke to the paediatric consultant, who assured us that it was operable, but that it was taking up considerable space in Kai’s head.  Cerebrospinal fluid is building up as a cyst is blocking the channels it uses to drain.  Thankfully they told us that the prospects were good and that Kai would be expected to make a full recovery, before showing us the MRI pictures.

    Figure: Normal brain MRI cross section.  This normal scan shows the spaces in the middle of the brain that contain and produce the cerebrospinal fluid.

    Figure: Normal cerebrospinal flow.  This fluid is needed by the brain but is drained in the middle down the spinal column.

    Figure: Kai’s cyst blocking the four channels.  I do not think that I need to explain the difference between the healthy picture and Kai’s picture.  However, you can see in this first picture the faint outline of the cyst in the middle that is blocking the four channels from draining.  After seeing the scans, a neurosurgeon has decided that he is not acute, but needs an operation to unblock the flow.

    Figure: OMFG!  You can see in the second picture the effect of the build-up of fluid.  If I was not horrified by the first picture, I was seriously horrified by this one.

    What next?
    Kai is not presenting the symptoms of vomiting or listlessness that would indicate an immediate problem, and as such we will get an appointment to see the paediatric neurosurgeon at the Southern General hospital in about 4 weeks.  This timescale is based on the neurosurgeon seeing the scans.  After that, Kai will need an operation to release the pressure and either remove the cyst completely or put in a permanent shunt (a tube from brain to stomach) to bypass the blockage.  We have updated his notes for the referral with additional recent information on top of the scan, which the consultant thinks will help improve the timescales, but that is just a guess.

    All we can do now is wait and see, and be watchful for the telltale signs of listlessness, eye problems and vomiting that would signify a worsening of his condition.

    Technorati Tags: Personal

    Read the article

  • Fixing up Visual Studio’s gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspx

    Download tool

    Is there anything wrong with the built-in Visual Studio gitignore?  Yes, there is!

    First, some background: when you set up a git repo, it should be small and not contain anything not really needed.  One thing you should not have in your git repo is binary files.  These binary files may come from two sources.  One is the output files, in the bin and obj folders.  If you have a gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add).  The other source is the packages folder coming from your NuGet setup.  You do use NuGet, right?  Of course you do!  But that gitignore file doesn’t have any exclude clause for those folders.  You have to add it manually.  (It will very probably be included in some upcoming update or release.)  This is one thing that is missing from the built-in gitignore.  To add those few lines is a no-brainer; you just include this:

        # NuGet Packages
        packages/*
        *.nupkg
        # Enable "build/" folder in the NuGet Packages folder since
        # NuGet packages use it for MSBuild targets.
        # This line needs to be after the ignore of the build folder
        # (and the packages folder if the line above has been uncommented)
        !packages/build/

    Now, if you are like me, and you probably are, you add git repos faster than you can code, and you end up with a bunch of repos, and then start to wonder: did I fix up those gitignore files, or did I forget?

    The next thing you learn, for example by reading this blog post, is that the “standard” latest Visual Studio gitignore file exists at https://github.com/github/gitignore, under the file name VisualStudio.gitignore.  Here you will find all the new stuff; for example, the exclusion of the Roslyn ide folders was committed on May 24th.  So, you think, all is well, Visual Studio will use this file…

    I am very sorry, it won’t.  Visual Studio comes with a gitignore file that is baked into the release, and that is by this time “very old”.  The one at github is the latest.  The included gitignore misses the exclusion of the NuGet packages folder, and it also misses a lot of new stuff, like the Roslyn entries.

    So, how do you fix this, while we wait for the next version?  You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn’t it?

    IFix

    Enter IFix – install it from here.  IFix is a command line utility (the installer adds it to the system path; you might need to reboot), and one of its commands is gitignore.  If you run it from a directory, it will check, and optionally fix, all gitignores in all git repos in that folder or below.  So, start by running it from your C:/<user>/source/repos folder.  To run it in check mode – which will not change anything, just do a check:

        IFix gitignore --check

    What it will do is check whether the gitignore file is present, and if it is, check whether the packages folder has been excluded.  If you also want to see those that are ok, add the --verbose option.  The result may look like this:

    Fixing missing packages

    Let us fix a single repo by adding the missing packages structure, using the --fix option.  We first check, then fix, then check again to verify that the gitignore is correct, and that the “packages/” part has been added.
    If we open up the .gitignore, we see that the block shown below has been added to the end of the file.

    Comparing and fixing with the latest standard Visual Studio gitignore (from github)

    Now, this tells you if you miss the NuGet packages folder, but what about the latest gitignore from github?  You can check for this too, just add the option --merge (why it is named so will become clear further down).  So:

        IFix gitignore --check --merge

    The result may come out like this (sorry, no colors, not got that far yet here):

    As you can see, one repo has the latest gitignore (test1); the others are missing either 57 or 150 lines.  IFix has three ways to fix this: --add, --merge and --replace.  The options work as follows:

    Add: used to add the standard gitignore in the cases where a .gitignore file is missing, and only that – it won’t touch other existing gitignores.
    Merge: used to merge in the missing lines from the standard into the gitignore file.  If the gitignore file is missing, the whole standard will be added.
    Replace: used to force a complete replacement of the existing gitignore with the standard one.

    The Add and Replace options can be used without Fix, which means they will actually do the action.  If you combine them with --check, nothing is touched; it just does a verification.  So a merge check will tell you if there is any difference between the local gitignore and the standard gitignore – a compare, in effect.  When you do a fix merge it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore.  It may mean some things get doubled up if they are spelled a bit differently.  You might also see some extra comments added, but they do no harm.

    Init a new repo with the standard gitignore

    One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command; both will in effect do the same in this case.  So:

        IFix gitignore --add

    will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore.

    Notes

    The project is open-sourced at github, and you can also report issues there.
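
    To pull the whole workflow together, here is a sketch of a typical session over a repo folder, using only the switches covered above (the command shapes are inferred from the examples in this post, so double-check against the project's github page):

        IFix gitignore --check              # report repos missing the packages exclusions
        IFix gitignore --check --verbose    # same, but also list the repos that are ok
        IFix gitignore --fix                # add the missing NuGet packages block
        IFix gitignore --check --merge      # compare each local gitignore with the github standard
        IFix gitignore --fix --merge        # append the missing standard lines locally
        IFix gitignore --add                # give a brand new repo the full standard gitignore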

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data directly from DB2 to either a SQL Server or flat file.  I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).

    One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straightforward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the effort I went through.  That being said, please keep in mind that, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on.

    Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and a .Net data provider, which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here.

    As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.

    Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers.  After you select the driver, you'll need to enter your host name, login credentials and initial catalog.  A couple of things to note here.  First, the Initial Catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials: in my environment, we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB2, so I've entered it here.  Next, click the Data Links button.
    On the Data Links screen, enter your package collection on the first tab.  Package collection is one of those DB2 concepts I'm still trying to figure out.  From the little bit I've read, packages are used to control SQL compilation, and each DB2 connection needs one.  The package collection, I believe, controls where your package is created.  One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority.

    Next, click the ellipsis next to the Network drop-down.  Here you'll want to enter your host name again.  Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my host name here.

    Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character.  My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform.  Process binary as character was necessary to handle some of the DB2 data types; I had a few columns that showed all their data as "System.Byte[]", and checking Process binary as character resolved this.

    At this point, you should be good to go.  You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration.  The Test Connection button is obvious; this just verifies you can connect to the host using the configuration data you've entered.  The Packages button will attempt to connect to the host and create the packages required to execute queries.

    This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I started working with DB2 and SSIS.  I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
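
    As an aside, the settings above all end up as properties in the OLE DB connection string that the Data Links dialog builds.  Reconstructed from memory, it looks roughly like the snippet below; treat the property names as approximations and verify against the string the dialog actually generates for you:

        Provider=DB2OLEDB;User ID=MYPROFILE;Password=********;
        Initial Catalog=MYISERIES;Network Address=MYISERIES;Network Port=446;
        Package Collection=QGPL;DBMS Platform=DB2/AS400;
        Process Binary as Character=True;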

    Read the article

  • Oracle Open World 2012: SQL Developer Recap

    - by thatjeffsmith
    Last week was the ‘big show’ in San Francisco.  I was very happy to meet many of you in person.  And many of you had questions – lots of questions!  We had full or overflowing rooms for our sessions and hands-on labs.  The SQL Developer ‘booths’ were also slammed several times.  So exciting to see so many of YOU excited about SQL Developer.  It’s very cool to hear the stories of our tools saving you and your organizations so much time (and money!).  Instead of doing a Day 0 – Day 9 recap, I thought I’d share with you the questions that I heard more than once.  And just for giggles, I’ll throw in some answers as well.  So, in no particular order…

    What’s the difference between Oracle SQL Developer & Oracle SQL Developer Data Modeler?

    Mathematically speaking – two words.  But as far as the actual modeling features go, there’s no difference between the two applications.  The same ‘code’ or features, as they pertain to data modeling and design, are in both tools.  However, in SQL Developer you have all of the OTHER features fighting for real estate in the UI.  So I have a general rule of thumb: if you spend MOST of your time in the database, use SQL Developer; and if you spend most of your time in the data model, run the separate and dedicated program, Oracle SQL Developer Data Modeler.  Here are a couple of screenshots to drive home the UI point:

    Oracle SQL Developer

    Oracle SQL Developer Data Modeler running INSIDE of SQL Developer.  Notice how the Modeler menu items fold under the file menu?

    Oracle SQL Developer Data Modeler.  Easier to navigate and manipulate your models with the stand-alone modeler.  Just no worksheet to run your ad-hoc queries, etc.  Don’t forget you can disable the Data Modeler inside of SQL Developer via the Extensions preference page.

    How can I model my table partitions?

    Partitioning is defined via the physical model.  So after you have finished your relational model, you need to generate a physical model.  Open the properties for your physical model table and enable the ‘partitioned’ property.  Once you do so, the ‘Partitioning’ page will activate.  Lots and lots of partitioning support and options here.

    But what about Interval Partitioning?

    Interval partitioning is an extension of range partitioning introduced in 11gR2; we don’t currently support this partitioning scheme in SQL Developer.  But we’re working on it!

    Can SQL Developer ignore column order when comparing models?

    Yes!  After you start a model compare, one of your options is to disregard the order of an attribute or column definition.  Tell SQL Developer you don’t care where your column shows up, just as long as it DOES show up.

    Wow, you got a lot of questions around modeling!  Is that normal?

    Yes!  While we appreciate that many folks inherit their applications and associated designs, new applications are being ‘born’ every day.  Since both of our tools are free for anyone to design their new Oracle applications with, we attract a fair amount of attention.

    I want to do a Hands On Lab.  How do I get your software and instructional guides?

    Go here.  Download VirtualBox.  Then download the VB image.  Import the appliance.  Start it.  Connect oracle/oracle on the OEL VM.  Click on ‘Start Here’ in the desktop.  Follow the instructions.  If you need help, ask away!

    You went too fast in your Tips & Tricks session.  Do you have cliff notes?

    Yes!  And you’re SO close to finding them!  Just go to my SQL Developer resources page.  All of my tips are documented on this blog somewhere.  I’ve indexed the most popular ones on the resource page.
You can use the Search dialog on the right to find the rest. Or just send me a comment or question, and I’ll do my best to answer them as they come in.

    Read the article

  • How to deploy Document Set using CAML in SharePoint2010 solution package

    - by ybbest
    In my last post, I showed you how to use Document Sets through the SharePoint UI in the browser.  In this post, I’d like to show you how to create the same Document Set using CAML and a SharePoint solution package.  You can download the complete solution here.

    1. Create the Application Number site column using the SharePoint empty element item template in VS2010:

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <Field Type="Text"
                 DisplayName="ApplicationNumber"
                 Required="FALSE"
                 EnforceUniqueValues="FALSE"
                 Indexed="FALSE"
                 MaxLength="255"
                 Group="YBBEST"
                 ID="{916bf3af-5ec1-4441-acd8-88ff62ab1b7e}"
                 Name="ApplicationNumber"></Field>
        </Elements>

    2. Create the Loan Application Form and Loan Contract Form content types:

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <!-- Parent ContentType: Document (0x0101) -->
          <ContentType ID="0x0101005dfbf820ce3c49f69c73a00e0e0e53f6"
                       Name="Loan Contract Form"
                       Group="YBBEST"
                       Description="Loan Contract Form"
                       Inherits="TRUE"
                       Version="0">
            <FieldRefs>
              <FieldRef ID="916bf3af-5ec1-4441-acd8-88ff62ab1b7e" Name="ApplicationNumber" DisplayName="ApplicationNumber" />
            </FieldRefs>
          </ContentType>
          <!-- Parent ContentType: Document (0x0101) -->
          <ContentType ID="0x010100f3016e3d03454b93bc4d6ab63941c0d2"
                       Name="Loan Application Form"
                       Group="YBBEST"
                       Description="Loan Application Form"
                       Inherits="TRUE"
                       Version="0">
            <FieldRefs>
              <FieldRef ID="916bf3af-5ec1-4441-acd8-88ff62ab1b7e" Name="ApplicationNumber" DisplayName="ApplicationNumber" />
            </FieldRefs>
          </ContentType>
        </Elements>

    3. Create the Loan Application Document Set.

    4. Create the Document Set welcome page using the SharePoint Module item template.

    Notes:

    1. When creating a document set content type, you need to set Inherits="FALSE", or remove the Inherits="TRUE" from the content type definition (the default is Inherits="FALSE").  This is a Document Set limitation in the current version of SharePoint 2010.  Because of this, you also need to manually attach the event receiver and the Document Set welcome page to your custom Document Set content type.
    2. Shared Fields are push-down only.
    3. Document Sets are not available in SharePoint Foundation (only SharePoint Server 2010).
    4. You can’t have folders within document sets (you can place document sets in folders, though).

    For a complete list of limitations and considerations, see the references below.

    References:
    Document Set Limitations and Considerations in SharePoint 2010 1
    Document Set Limitations and Considerations in SharePoint 2010 2
    Document Sets planning (SharePoint Server 2010)
    Import Document Sets Issue
    http://msdn.microsoft.com/en-us/library/gg581064.aspx
    http://channel9.msdn.com/Events/TechEd/NorthAmerica/2010/OSP305
    DocumentSet Class
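
    As a footnote to step 3 above, which the post doesn’t show CAML for: a minimal sketch of the Loan Application Document Set content type might look like the following.  This assumes the standard Document Set base content type ID (0x0120D520), a made-up ID suffix, and the Inherits="FALSE" caveat from the notes; the welcome page and allowed content types wiring are omitted:

        <!-- Sketch only: the ID suffix is illustrative, and the AllowedContentTypes
             and welcome page XmlDocuments sections still need to be added. -->
        <ContentType ID="0x0120D52000a359e1d9d9924a9bb4015c8bc7b9b4cd"
                     Name="Loan Application"
                     Group="YBBEST"
                     Description="Loan Application Document Set"
                     Inherits="FALSE"
                     Version="0">
          <FieldRefs>
            <FieldRef ID="{916bf3af-5ec1-4441-acd8-88ff62ab1b7e}" Name="ApplicationNumber" DisplayName="ApplicationNumber" />
          </FieldRefs>
        </ContentType>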

    Read the article

  • Invariant code contracts – using class-wide contracts

    - by DigiMortal
    It is possible to define invariant code contracts for classes.  Invariant contracts should always hold true, no matter which member of the class is called.  In this posting I will show you how to use invariant code contracts so you understand how they work and how they should be tested.

    This is the randomizer class I am using to demonstrate code contracts.  I added one method for invariant code contracts.  Currently there is one contract that makes sure that the random number generator is not null.

        public class Randomizer
        {
            private IRandomGenerator _generator;

            private Randomizer() { }

            public Randomizer(IRandomGenerator generator)
            {
                _generator = generator;
            }

            public int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return _generator.Next(min, max);
            }

            [ContractInvariantMethod]
            private void ObjectInvariant()
            {
                Contract.Invariant(_generator != null);
            }
        }

    Invariant code contracts are defined in methods that have the ContractInvariantMethod attribute.  Some notes:

    It is a good idea to define invariant methods as private.
    Don’t call invariant methods from your code, because the code contracts system does not allow it.  Invariant methods are defined only as a place where you can keep invariant contracts.
    Invariant methods are called only when a call to some class member is made!

    The last note means that having an invariant method and creating a Randomizer object with null as the argument does not automatically generate an exception.  We have to call at least one method from the Randomizer class.  Here is the test for the generator.  You can find more about testing contracted code in my posting Code Contracts: Unit testing contracted code, which also explains why the exception handling in the test looks the way it does.

        [TestMethod]
        [ExpectedException(typeof(Exception))]
        public void Should_fail_if_generator_is_null()
        {
            try
            {
                var randomizer = new Randomizer(null);
                randomizer.GetRandomFromRangeContracted(1, 4);
            }
            catch (Exception ex)
            {
                throw new Exception(ex.Message, ex);
            }
        }

    Try out this code – with unit tests or with a test application – to see that invariant contracts are checked as soon as you call some member of the Randomizer class.
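
    For a quick interactive check, a minimal sketch like the one below shows the same behaviour outside of a test harness (this assumes runtime contract checking is enabled for the build, and that some IRandomGenerator implementation exists):

        // Sketch only: per the note above, no exception is thrown here yet -
        // the invariant has not been evaluated at this point.
        var randomizer = new Randomizer(null);

        // The invariant runs as part of this member call and fails,
        // because _generator is null.
        randomizer.GetRandomFromRangeContracted(1, 4);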

    Read the article

  • Agile Testing Days 2012 – Day 1 – The birth of the #unicorn…

    - by Chris George
Still riding the high from the tutorial day, I arrived at the conference venue eager to get cracking with the day's talks. The opening keynote was "Disciplined Agile Delivery: The Foundation for Scaling Agile", presented by Scott Ambler. The general ideas behind the methodology, such as not re-inventing the wheel and being goal driven rather than prescriptive in how you work, certainly struck chords with how we are trying to work in my team. Scott made some interesting observations about how Scrum is quite prescriptive – and is this really agile? I agreed with quite a few of his points on how what works for one team may not work for another. How a team works should be driven by context and reflection, not process and prescription. However, I was somewhat dubious about some of the statistics he rolled out towards the end. Nevertheless, out of this keynote was born something that was to transcend this one presentation. During the talk, Scott mentioned on more than one occasion "in the real world", and at one point made reference to people living in the land of unicorns and rainbows. The challenge was then laid down on Twitter for all speakers to include a unicorn in their presentations… and for the most part this happened! It became an identity for this year's conference, and I'm sure something that any attendee will always associate with Agile Testing Days 2012! Following this keynote, I attended "Going agile with Automated GUI Testing – Some personal insights" by Jan Zdunek from codecentric on the vendor track. My speciality is test automation, and in particular GUI testing, so this drew me to this talk more than the others. Thankfully, it was made clear from the very start that this was not peddling any particular product (even though it was on the vendor track), and Jan faithfully stuck to that. Most of the content was not new to me, but it was really comforting to hear someone else with very similar experiences to my own. In particular, things like how GUI testing is hard and is not a silver bullet, and how record & replay is NOT a good thing to do (which drew a somewhat inflammatory tweet from an automation company when I tweeted that!). Something that I have started hearing around the place, and that has certainly been murmured about at work, is to push more of the automation coding onto the developers. After all, they are the coding experts. I agree with this to a degree, but I personally enjoy coding and find it very rewarding, so I'd be reluctant to give it up. I think there are some better alternatives, such as pairing with a developer. Lastly, Jan mentioned, almost in passing, that we should consider virtualisation for GUI testing to cover configuration combinations. On my project we've been running our win32/.NET GUI tests in cloud virtualisation for a couple of years now… I really should write about that! After lunch, the second keynote of the day was by Lisa Crispin and Janet Gregory, "Myths about Agile Testing, De-Bunked". It started off well, with the two ladies donning Medusa-style head bands whilst debunking several myths about agile testing! I got the impression that it was perhaps not as slick as they would have liked, but then Janet was suffering with a very sore throat so kept losing her voice. Nevertheless, the presentation was captivating, and they debunked several myths such as: "Testing is dead", "Testers must write code", "Agile teams always deliver faster".
I didn't take many notes for this because it was being recorded, but unfortunately the recordings have not been posted yet, so I'll write more about this when they are. The TestLab was held during a somewhat free-for-all period over most of the afternoon. It looked intriguing and proved to be one of the surprising experiences of the conference for me. Run by James Lyndsay and Bart Knaack, it consisted of a number of 'stations' that offered different testing problems. I opted for testing a mathematical drawing app called GeoGebra, the task being to pair up and exploratory test it. After an allotted time, we discussed the issues we'd found and decided if we wanted to continue 'playing', to which we all agreed! It was fun! The last track talk of the day was "Developers Exploratory Testing – Raising the bar" by Sigge Birgisson. One of the teams at Red Gate has tried Dev or Team exploratory testing a couple of times, and I was really interested to go to the presentation that prompted that. I was not disappointed! Sigge gave a first-class presentation, and not only explained what DET was all about, but also how to go about implementing it. Little tips like calling it a 'workshop' rather than 'testing' I can really see working! Monday evening saw the award for the Most Influential Agile Testing Professional Person go to a much-deserving Lisa Crispin. The evening was great, with acrobatics, magic and music.

My Takeaway Triple from Day 1:
Some of the cool stuff that was suggested in the GUI testing talk we are already doing. I should write about that!
Testing is not dead! Perhaps testing will become more of a skill than a specific role, but it is certainly not dead.
Team/Developer exploratory testing… seems like a no-brainer, assuming you have a team who is willing.

Day 2 – Coming soon…

    Read the article

  • ISO Files to USB – The Cheap and Easy Way

    - by RonGarlit
(DISCLAIMER: Yes, there is plenty of more elegant ISO software besides the free Microsoft one I'm about to show. But free is free, and it has been tested and works for me for making advanced bootable USB drives. That is another story – look up Windows 8 Developer Preview for that one on BING.) Those of us who work with new technology all the time accumulate a lot of ISO files and have to burn them to CDs/DVDs quite often. But we now have machines without burners in the corporate environment. We personally have netbooks and lightweight, highly mobile laptops that do not have DVD burners. USB ports are all the rage, and now we have USB 3.0, which is way faster than the 2.0 we are used to. Just looking at the technology, space savings and cost issues alone is reason enough to adopt these answers to the DVD. So what is special about USB 2.0 and USB 3.0? USB 2.0 has a maximum speed of 480 Mbps... (That is megabits per SECOND!!) Now look at the storage that we have with USB thumb drives that are now up to 64 GB in size, and cell phones and PDAs that have lots of internal storage built in, well above the 16 Gig range. At the max USB 2.0 speed of 480 Mbps, a full transfer of data between devices can take a long time. Time is money, right? Ever back up an iPhone? Don't get me started. So at least the engineers have been planning ahead with USB 3.0, which offers a maximum transfer speed of 4.8 Gbps... (That is gigabits per SECOND!!) That speed is almost 10 times faster than USB 2.0…. We don't need to do the math on that one, do we? But for now I'm thrilled with USB 2.0 and the fact I can get these little 4 Gig USB drives for $4.00 each at Staples on sale. Well, that is a no-brainer, don't you think? But what can you do with them to replace that DVD? Simply and cheaply put………. THIS! First let's get an ISO file, like the Visual Studio 2010 Ultimate DVD ISO from MSDN, to demonstrate with. I develop on several computers, so this is a good choice for me. So we download the ISO file and put it in a folder somewhere, like this. Next we go to the Windows 7 USB/DVD Download Tool site and read about the tool: http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool And click this link to get the tool and install it. Once it is installed, you go to the Start, Programs menu, Windows 7 USB DVD Download Tool folder. And then click the tool to open it up. As you will see, it is a sweet, simple tool that was originally designed to put the bootable Windows 7 ISO onto a USB drive or DVD for us geeks to play with. It is now being used for the Windows 8 Developer Preview by many developers, for the same purpose it was built for in the past. But for now we will use it to put a NON-bootable ISO on a USB drive. Hey, it does the job, and I'm reusing a leftover program. Why buy the fancy one, or get a free trial, and clutter up my machine? We will click the BROWSE button and navigate to where we put the ISO file we want to put on the USB drive. Obviously we are going to click NEXT and continue to select a USB device (you can guess what the DVD button is for). Next we select the USB drive that we have plugged into one of our laptop's USB ports. Then we click the BEGIN COPYING button, and the first thing the program does is format our USB drive. Then it starts copying files out of the ISO and constructing the USB as if it were a DVD. So now that the files are copying to the drive, I'm going to warn you: we will error out here. This program was designed for bootable ISOs, which this one is NOT.
No problem, because what fails is the writing of the bootable data to the drive – data that this ISO doesn't have. No biggie…. Forget the STARTOVER button is even there; click the dialog's CLOSE button and exit the program. Now go to Windows Explorer and navigate to the USB device. You can now access everything and even add stuff to the drive. But for me, I want to keep this drive for one purpose, and that is to install VS2010 on various machines. So the only thing I'll add to it is a folder of notes on Visual Studio that I might want to put on the other machines I'm installing VS2010 onto. So that is it. Have a nice day! The Ron

    Read the article

  • Silverlight 5 – What's New? (Including Screenshots & Code Snippets)

    - by mbcrump
Silverlight 5 is coming next year (2011), and this blog post will tell you what you need to know before the beta ships. First, let me address the people saying that it is dead after PDC 2010. I believe that it's best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications, shown during the Silverlight Firestarter. Some of the companies have shipped and some haven't. It's just great to see the actual company names that are working on Silverlight instead of "people are developing for Silverlight". The next thing that I wanted to point out is that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Guthrie's keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology. Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let's get started… what's new in Silverlight 5? I am going to show you some great applications and actual code shown during the Firestarter event.

Media
Hardware Video Decode – Instead of using the CPU to decode, we will offload it to the GPU. This will allow netbooks, etc. to play videos.
Trickplay – Variable speed playback with pitch correction (if you speed up someone talking, they won't sound like a chipmunk).
Power Management – Less battery use when playing video. Screensavers will no longer kick in while watching a video; if you pause a video, the screensaver will kick in.
Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fast-forward.
IIS Media Services 4 has shipped and now supports Azure.

Data Binding
Layout Transitions – With just a few lines of XAML you can create a really rich experience without using Storyboards or animations.
RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control.
Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support (a small sketch follows below).
Changing Styles at Runtime by Binding in Style Setters – Changing styles at runtime used to be a real pain in Silverlight 4; now it's much easier. Binding in style setters allows bindings to reference other properties.
XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.

WCF & RIA Services
WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange.
You can reduce network latency by using a background thread for networking.
Supports Azure now.

Text and Printing
Improved text clarity that enables better text rendering.
Multi-column text flow, character tracking and leading support, and full OpenType font support.
Includes a new PostScript vector printing API that provides control over what you print.
Pivot functionality baked into the Silverlight 5 SDK.
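As a small taste of the custom markup extensions feature listed above, here is a minimal sketch. It assumes Silverlight 5's IMarkupExtension<T> interface from the System.Xaml namespace; the extension itself is made up for illustration:

using System;
using System.Xaml;

// A made-up extension that upper-cases a string when the XAML is parsed.
public class UpperCaseExtension : IMarkupExtension<object>
{
    public string Text { get; set; }

    public object ProvideValue(IServiceProvider serviceProvider)
    {
        return (Text ?? string.Empty).ToUpperInvariant();
    }
}

Hypothetical usage in XAML: <TextBlock Text="{local:UpperCase Text=hello}" />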
Graphics
Immediate mode graphics support that will enable you to use the GPU and 3D graphics support. Take a look at what was shown in the demos below.
A 3D view of the Earth – not really a real-world application, though.
A doctor's portal – this demo really stood out for me, as it shows what we can do with the 3D/GPU support.

Out of Browser
OOB applications can now create and manage child windows, as shown in the screenshot below.
Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.
Enterprise Group Policy Support allows enterprises to lock down or open up the sandbox capabilities of Silverlight 5 applications.
In this demo, he tore the "notes" off of the application and it appeared in a new window. See the black arrow below.
In this demo, he connected a USB device, which fired off a local Win32 application that provided the data off the USB stick to Silverlight.
Another demo showed a Silverlight 5 application exporting data right into Excel, running inside the browser.

Testing
They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated tests without writing any code manually.

Performance
Microsoft has worked to improve Silverlight startup time.
Silverlight 5 provides 64-bit browser support.
Silverlight 5 also provides IE9 hardware acceleration.

I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon. Subscribe to my feed. CodeProject

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
Here is a list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

2006
SQL SERVER – Cursor to Kill All Process in Database
I indeed wrote this cursor, and when I look back, I wonder how naive I was to write it. The reason for writing this cursor was to free up my database from any existing connections so I could do database operations. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here.

2007
Rules of Third Normal Form and Normalization Advantage – 3NF
The rules of 3NF are mentioned here:
Make a separate table for each set of related attributes, and give each table a primary key.
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
If attributes do not contribute to a description of the key, remove them to a separate table.

Correct Syntax for Stored Procedure SP
Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. A few write code after the outermost BEGIN…END and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same.

2008
Switch Between Result Pane and Query Pane – SQL Shortcut
Many times when I am writing a query I have to scroll through the results displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane.

Interesting Observation – Use of Index and Execution Plan
Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data, as it provides optimal performance.

2009
Interesting Observation – TOP 100 PERCENT and ORDER BY
If you pull up any application or system where more than 100 SQL Server views are created, I am very confident that in one or two places you will notice the scenario wherein the ORDER BY clause is used in a view with TOP 100 PERCENT. In SQL Server 2008, a VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge its presence either. In this article we have taken three perfect examples and demonstrated which clause we should use when.

Comma Separated Values (CSV) from Table Column
A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject.

Azure Start Guide – Step by Step Installation Guide
Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not outdated. They are still valid, and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to using Azure!
Size of Index Table for Each Index – Solution
Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that will return the size of each index on any particular table. We need a query that will return an additional column in the above-listed query, and it should contain the size of the index. This article presents two of the best solutions from the puzzle.

2010
Well, this week in 2010 was the week of puzzles, as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky, but they bring great value for sure if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will help you enhance your experience with T-SQL.
SQL SERVER – Challenge – Puzzle – Usage of FAST Hint
SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal
SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists

2011
DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012
Have you ever faced a situation where something does not work? When you try to fix it, you enjoy fixing it and start to appreciate the breaking changes. Well, this is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it!

Get Directory Structure using Extended Stored Procedure xp_dirtree
Here is a question for you – why would you do something in SQL Server when you can do the same task much more easily at the command prompt? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example, where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Upgrading to Oracle Enterprise Manager 12c Release 2: Top Tips One Must Know

    - by AnkurGupta
Recently Oracle announced an incremental release of Enterprise Manager 12c called Enterprise Manager 12c Release 2 (EM12c R2), which includes several exciting new features (press announcement). Right before the official release, we upgraded an internal production site from EM 12c R1 to EM 12c R2 and had an extremely pleasant experience. Let me share a few key takeaways as well as a few tips from this upgrade exercise.

I - Why Should You Upgrade To Enterprise Manager 12c Release 2
While an upgrade is usually recommended primarily to take advantage of the latest features (which is valid for this upgrade as well), I found several other compelling reasons purely from a deployment perspective.

Standardize your EM deployment: Enterprise Manager comprises several different components (OMS, agents, plug-ins, etc.), and it is possible that these are at varied patch levels in your environment. For instance, in the case of an environment containing Bundle Patch 1 (customer announcement), there is a good chance that you may not have all the components up to date. There are two possible reasons. First, Bundle Patch 1 involved patching different components (OMS, agents, plug-ins) with multiple one-off patches, which may not have been applied to all components yet. Second, Bundle Patch 1 for different platforms was not released together, which means you may not have had the chance to patch all the components on different platforms. Note: BP1 patches are not mandatory to upgrade to the EM12c R2 release. EM 12c R2 provides an excellent opportunity to standardize your Cloud Control environment (OMS, repository and agents) and plug-ins to the latest versions in a single shot.

All platform releases are made available simultaneously: For the very first time in the history of EM releases, all platforms were released on day one itself, which means you do not need to wait for platform-specific binaries for the EM OMS or Agent to perform installs or upgrades in a heterogeneous environment.

Highly refined and automated process: The upgrade process is by far the smoothest and cleanest compared to previous releases of Enterprise Manager. The following are the ones that stand out.
Automatic plug-in management – Plug-in upgrade along with new plug-in deployment is supported in the upgrade installer wizard, which means the bulk of the updates to the OMS and repository can be done in the same workflow. This saves time and minimizes user input.
Plug-in Upgrade or Migrate Auto Update – While doing the OMS and repository upgrade, you can use the Auto Update screen in Oracle Universal Installer to check for any updates/patches. That will help you avoid the known issues and will make sure that your upgrade is successful.
Mass upgrade of EM agents – A new dedicated menu has been added in the EM console for agent upgrade. The agent upgrade workflow is extremely simple, requiring the agent name as the only input.
ADM / JVMD Manager/Agent upgrade – The complete process is supported via UI screens.
The EM12c R2 Upgrade Guide is much simpler to follow than those for earlier releases. This is attributed to the simpler upgrade process.

Robust and performing platform: The EM12c R2 release not only includes several new features, but also provides a more stable platform which incorporates several fixes and enhancements in the Enterprise Manager framework.

II - Few Tips To Remember
In my last post (blog link) I shared a few tips and tricks from my experience applying the Bundle Patch.
Recently I upgraded the same site to EM 12c R2 and found a few points that you must take note of while planning this upgrade. The tips below are also applicable to EM 12c R1 environments that do not have the Bundle Patch 1 patches applied.

Verify the monitored application certification – Specific targets like E-Business Suite have not yet been certified as managed targets in EM 12c R2. Therefore make sure to recheck the Enterprise Manager certification matrix on My Oracle Support before planning the upgrade.
Plan downtime – Because EM 12c R2 is an incremental release of EM 12c, the EM 12c R1 to EM 12c R2 upgrade supports only the 1-system upgrade approach, which means there will be downtime.
OMS name change after upgrade – In multi-OMS environments, the additional OMS is renamed after the upgrade, which has a few implications when you upgrade the JVMD and ADP agents on the OMS. This is well documented in the upgrade guide, but make sure you read through all the notes.
Upgrading BI Publisher – EM12c R2 is certified with BI Publisher 11.1.1.6.0 only. Therefore, in case you are using EM 12c R1 integrated with BI Publisher 11.1.1.5.0, you must upgrade BI Publisher to 11.1.1.6.0. Follow the steps from the Advanced Installation and Configuration Guide here.
Perform post-upgrade tasks – Make sure to perform the post-upgrade steps mentioned in the documentation here. These include critical changes that must be made right after the upgrade to get the right configuration. For instance, the Database plug-in should be upgraded to Revision 3 (12.1.0.2.0 [u120804]).
Delete the old OMS home – EM12c R1 to EM12c R2 is an out-of-place upgrade, which means it creates a new Oracle home for the OMS, plug-ins, etc. Therefore please ensure that you have sufficient extra space for the new OMS before starting the upgrade process, and that you clean up the old OMS home after the upgrade process. Steps are available here. DO NOT remove the agent home on the OMS host, because the agent is upgraded in place.
If you have a standby OMS setup, then do look into the steps to upgrade the standby OMS in the upgrade guide before going ahead.
Read the right documentation – Make sure to follow the Upgrade Guide, which provides the most comprehensive information on the EM12c R2 upgrade process. Additionally, you can refer to other resources to get familiar with the upgrade concepts:
Recorded webcast - Oracle Enterprise Manager Cloud Control 12c Release 2 Installation and Upgrade Overview
Presentation - Understanding Enterprise Manager 12.1.0.2 Upgrade

We are very excited about this latest release and look forward to hearing feedback from your upgrade experience!

    Read the article

  • How to Automate your Database Documentation

    - by Jonathan Hickford
In my previous post, "Automating Deployments with SQL Compare command line", I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I'm looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable. For example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risks of making incorrect decisions when making changes. Documentation is very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most! SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate's SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring documentation that is always up to date. The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source-controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you're using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don't already have a CI server in place, there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won't cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate's SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control. The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
$serverName = "server\instance"
$databaseName = "databaseName"  # If you want to document multiple databases use a comma separated list
$userName = "username"
$password = "password"

# Path to SQLDoc.exe
$SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

$arguments = @(
    "/server:$($serverName)",
    "/database:$($databaseName)",
    "/username:$($userName)",
    "/password:$($password)",
    "/filetype:html",
    "/outputfolder:.",
    # "/project:$args[0]",  # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line
    "/name:$databaseName Report",
    "/copyrightauthor:$([Environment]::UserName)"
)

write-host $arguments
& $SQLDocPath $arguments

There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line, please see the SQL Doc command line documentation. If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project, please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.

Conclusion
Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process, you can trigger the documentation to be generated on every change to a source-controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
Did you know you can "print an outline"? You can print any outline or portion of an outline. Why might you want to "print an outline" in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or, even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor. To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns, refer to the Add and Remove Columns section of the Content Development.pdf guide.)

To print an outline from the Outline Editor:
1. Open a module or section document in the Outline Editor.
2. Expand the documents to display the details that you want included in the report.
3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel, you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a Team Lead or Author can better report project status. Read more about Managing Workflow below.

Managing Workflow:
The Properties toolpane contains special properties that allow authors to track document status or State, as well as assign Document Ownership.

Assign Content State
The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values.

To assign a State value to a document:
1. Make sure you are working online.
2. Display the Properties toolpane.
3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the State cell.
5. Select a value from the list.

Assign Document Ownership
In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

To assign a document owner:
1. Make sure you are working online.
2. On the View menu, choose Properties.
3. Select the document(s) to which you want to assign document responsibility.
Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the Owner cell.
5. Select a name from the list.

Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think.

- Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • Sound vs. Valid Argument

    - by MarkPearl
Today I spent some time reviewing my Formal Logic course for my upcoming exam. I came across a section that I have never really explored in any proper depth… the difference between a valid argument and a sound argument. Here go some notes I made…

What is an argument? In this case we are not referring to a verbal fight, but to what we call a set of premises followed by a conclusion. Before we go further we need to understand what a premise is… a premise is a statement that an argument claims will induce or justify a conclusion. Think of a premise as an assumption that something is true. So, an argument can consist of one or more premises and a conclusion…

When is an argument valid? An argument can be either valid or invalid. An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. It is generally easier to determine if an argument is invalid. Do this by applying the following… Assume that all the premises are true, then ask yourself if it is now possible for the conclusion to be false. If the answer is "yes," the argument is invalid. If it's "no," the argument is valid.

Example 1…
P1 – Mark is tall
P2 – Mark is a boy
C – Mark is a tall boy

Walkthrough 1… Assume "Mark is tall" is true and also assume that "Mark is a boy" is true. Based on these two premises, the conclusion is also true – Mark is a tall boy – thus it is a valid argument. Let's make this an invalid argument…

Example 2…
P1 – Mark is tall
P2 – Mark is a boy
C – Mark is a short boy

Walkthrough 2… This would be an invalid argument, since from the premises we assume that Mark is tall and that he is a boy, and the conclusion then goes against this by saying that Mark is short. Thus an invalid argument.

When is an argument sound? An argument is said to be sound when it is valid and all the premises are indeed true (not just assumed to be true). Rephrased, an argument is said to be sound when the conclusion will follow from the premises and the premises are indeed true in real life. In example 1 we were referring to a specific person; if we generalized it a bit, we could come up with the following example.

Example 3
P1 – All people called Mark are tall
P2 – I know a specific person called Mark
C – He is a tall person

In this instance, it is a valid argument (we assume the premises are true, which leads to the conclusion being true), but the argument is NOT sound: in the real world there must be at least one person called Mark who is not tall. Something also to note: all invalid arguments are also unsound. This makes sense – if an argument is not valid, how on earth can it hold up in the real world?

What happens when the premises contradict themselves? This is an interesting one… An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. When premises are contradictory, the argument is always valid, because it is impossible for all the premises to be true at one time. Let's look at an example…

P1 – Elvis is dead
P2 – Elvis is alive
C – Laura is a woolly mammoth

This is a valid argument, but not a sound one. Think about it. Is it possible to have a situation in which the premises are true and the conclusion is false? Sure, it's possible to have a situation in which the conclusion is false, but for the argument to be invalid, it has to be possible for the premises to all be true at the same time the conclusion is false.
So if the premises can't all be true, the argument is valid. (If you still think the argument is invalid, draw a picture in which the premises are all true and the conclusion is false. Remember, there's only one Elvis, and you can't be both dead and alive.) For more info on this I suggest reading the following blog post.
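Pulling the definitions together in one place (a sketch in propositional terms, matching the notes above):

\[ \text{An argument } P_1, \ldots, P_n \;\therefore\; C \text{ is \textbf{valid} iff } (P_1 \land P_2 \land \cdots \land P_n) \rightarrow C \text{ holds in every possible situation.} \]

\[ \text{It is \textbf{sound} iff it is valid and } P_1, \ldots, P_n \text{ are all true in the real world.} \]

When the premises contradict each other, \( P_1 \land \cdots \land P_n \) is false in every situation, so the conditional above is vacuously true and the argument is automatically valid – though it can never be sound.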

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying the BEA ALBPM product, which became Oracle BPM 10gR3, with the Oracle stack. Some of the highlights of this release are:

BPMN 2.0 modeling and simulation
Web-based Process Composer for BPMN and Rules authoring
Zero-code environment with full access to Oracle SOA Suite's rich set of application and other adapters
Process Spaces – Out-of-box integration with WebCenter Suite
Process Analytics – Native process cubes as well as integration with Oracle BAM

You can learn more about this release from the documentation.

Notes about downloading and installing
Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (an incremental patch only). To install:

Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location)
Download and install SOA 11.1.1.3.0
During the configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with SOA Suite 11g (11.1.1.3.0)
If you plan to use Process Spaces, also install WebCenter 11.1.1.3.0, which is also delivered as a sparse release and needs to be installed on top of WebCenter 11.1.1.2.0

Some early feedback
We have been receiving very encouraging feedback on this release. Some quotes from partners are included below:

"I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!"
Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA

"Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution.
BPM just is one of the components plugging into the stack and reuses all other components."
Mr. Leon Smiers, Oracle Solution Architect, Capgemini

"I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I'm really looking forward to my first project using Oracle BPM11g!"
Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company: Trivadis

Some earlier feedback was summarized in this post.

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged resources – native resources, open files, streams, window handles – your application may leak memory if these are not properly handled. To handle such resources, the classes that own them in your application should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface. When you suspect a memory leak, the immediate impulse would be to start up a memory profiler and start digging into that. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code. If you get any warnings from this, fix them before you go on with the profiling.

How to use a ruleset
In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded – see the bottom of this post. When you get them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one we have specified here.

How to define your own ruleset (skip this if you just download my predefined rulesets)
If you want to define the ruleset yourself, open the properties on any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box and press Open. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unselecting the box. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to either warning, error or none, none being the same as unchecking the check. Now go to the properties window and set a new name and description for your ruleset. Then save (File/Save as) the ruleset using the new name as its name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item. That way it's there if you want to enable Code Analysis in some of your TFS builds.

Running the code analysis
In Visual Studio 2010 you can either run your code analysis project by project using the context menu in the Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration, call it for example Debug (Code Analysis), and for each project in it enable Enable Code Analysis on Build. In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click and choose "Run code analysis on solution".

The ruleset checks
The following list contains the essential and critical memory checks. For each check: the CheckID, the message, whether it can be ignored, and a link to its description with fix suggestions.

CA1001 – Types that own disposable fields should be disposable. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182172.aspx
CA1049 – Types that own native resources should be disposable. Can be ignored: Only if the pointers assumed to point to unmanaged resources point to something else. http://msdn.microsoft.com/en-us/library/ms182173.aspx
CA1063 – Implement IDisposable correctly. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms244737.aspx
CA2000 – Dispose objects before losing scope. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182289.aspx
CA2115 (1) – Call GC.KeepAlive when using native resources. Can be ignored: See description. http://msdn.microsoft.com/en-us/library/ms182300.aspx
CA2213 – Disposable fields should be disposed. Can be ignored: If you are not responsible for their release, or if Dispose occurs at a deeper level. http://msdn.microsoft.com/en-us/library/ms182328.aspx
CA2215 – Dispose methods should call base class dispose. Can be ignored: Only if the call to base happens at a deeper calling level. http://msdn.microsoft.com/en-us/library/ms182330.aspx
CA2216 – Disposable types should declare a finalizer. Can be ignored: Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources. http://msdn.microsoft.com/en-us/library/ms182329.aspx
CA2220 – Finalizers should call base class finalizers. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182341.aspx

Notes: (1) Does not result in a memory leak, but may cause the application to crash.

The list below is a set of optional checks that may be enabled for your ruleset, because the issues they point to often happen as a result of attempting to fix up the warnings from the first set. For each check: the ID, the message, the type of fault, whether it can be ignored, and a link to its description with fix suggestions.

CA1060 – Move P/Invokes to NativeMethods class. Fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182161.aspx
CA1816 – Call GC.SuppressFinalize correctly. Fault: Performance. Can be ignored: Sometimes, see description. http://msdn.microsoft.com/en-us/library/ms182269.aspx
CA1821 – Remove empty finalizers. Fault: Performance. Can be ignored: No. http://msdn.microsoft.com/en-us/library/bb264476.aspx
CA2004 – Remove calls to GC.KeepAlive. Fault: Performance and maintainability. Can be ignored: Only if it is not technically correct to convert to SafeHandle. http://msdn.microsoft.com/en-us/library/ms182293.aspx
CA2006 – Use SafeHandle to encapsulate native resources. Fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182294.aspx
CA2202 – Do not dispose of objects multiple times. Fault: Exception (System.ObjectDisposedException). Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182334.aspx
CA2205 – Use managed equivalents of Win32 API. Fault: Maintainability and complexity. Can be ignored: Only if the replacement doesn't provide the needed functionality. http://msdn.microsoft.com/en-us/library/ms182365.aspx
CA2221 – Finalizers should be protected. Fault: Incorrect implementation, only possible in MSIL coding. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182340.aspx

Downloadable ruleset definitions
I have defined three rulesets: one called Inmeta.Memorycheck with the rules in the first list above, one called Inmeta.Memorycheck.Optionals containing the rules in the second list, and a last one called Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck" downloadable from here.
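As a reminder of what CA1063 and its companions are checking for, here is a minimal sketch of the standard dispose pattern for a class that owns both a managed disposable field and a native resource; the member names are illustrative only:

using System;
using System.IO;
using System.Runtime.InteropServices;

public class ResourceHolder : IDisposable
{
    private FileStream _stream;       // managed disposable resource (illustrative)
    private IntPtr _nativeBuffer;     // unmanaged resource (illustrative)
    private bool _disposed;

    public ResourceHolder(string path, int bufferSize)
    {
        _stream = new FileStream(path, FileMode.OpenOrCreate);
        _nativeBuffer = Marshal.AllocHGlobal(bufferSize);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // finalization is unnecessary once disposed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // Dispose managed resources only on the explicit Dispose path.
            if (_stream != null) _stream.Dispose();
        }

        // Release unmanaged resources on both paths.
        if (_nativeBuffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_nativeBuffer);
            _nativeBuffer = IntPtr.Zero;
        }

        _disposed = true;
    }

    // Finalizer as a safety net for the unmanaged resource (see CA2216).
    ~ResourceHolder()
    {
        Dispose(false);
    }
}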
Links to some other resources relevant to Static Code Analysis:
MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
MSDN: Analyzing Managed Code Quality by Using Code Analysis (the root of the documentation for this)
Preventing generated code from being analyzed using attributes
Online training course on Using Code Analysis with VS2010
Blog post by Tatham Oddie on custom code analysis rules
How to write custom rules, from the Microsoft Code Analysis Team Blog
Microsoft Code Analysis Team Blog

    Read the article

  • Task-It Webinar - Source Code

Last week I presented a webinar called "Building a real-world application with RadControls for Silverlight 4". For those that didn't get to see the webinar, you can view it here: Building a real-world application with RadControls for Silverlight 4. Since the webinar I've received several requests asking if I could post the source code for the simple application I showed, demonstrating some of the techniques used in the development of Task-It, such as MVVM, Commands and Internationalization. This source code is now available for download here. After downloading the source:

Extract it to the location of your choice on your hard drive.
Open the solution.
Right-click ModuleProject.Web and select 'Set as StartUp Project'.
Right-click ProjectTestPage.aspx and select 'Set as Start Page'.
Create a database in SQL Server called WebinarProject.
Navigate to the Database folder under the WebinarProject directory and run the .sql script against your WebinarProject database.

The last two steps are necessary only for the Tasks page to work properly (using WCF RIA Services). Now some notes about each page:

Code-behind
This is not the way I recommend coding a line-of-business application in Silverlight; I simply wanted to show how the code-behind approach would look.

Command
This page introduces MVVM and Commands. You'll notice in the XAML that the Command property of the RadMenuItem and the Button are both bound to a SaveCommand. That comes from the view model. If you look in the code-behind of the user control you'll see that an instance of a CommandViewModel is instantiated and set as the DataContext of the UserControl. There is also a listener for the view model's SaveCompleted event. When this is fired, it tells the view (UserControl) to display the MessageBox. A minimal sketch of this wiring appears at the end of this post.

Internationalization
This sample is similar to the previous one, but instead of using hard-coded strings in the UI, the strings are obtained via binding to view model properties. The view model gets the strings from the .resx files (Strings.resx or Strings.de.resx) under Assets/Resources. If you uncomment the call to ShowGerman() in App.xaml.cs's Application_Startup method and re-run the application, you will see the UI in German. Note that this code, which sets the CurrentCulture and CurrentUICulture on the current thread to "de" (German), is for testing purposes only.

RadWindow
Once again, very similar to the previous example. The difference is that we are now using a RadWindow to display the 'Saved' message instead of a MessageBox. The advantage here is that we do not have to hold on to a reference to the view model in our code-behind so that we can get the 'Saved' message from it. The RadWindow's DataContext is now also bound to the view model, so within its XAML we can bind directly to properties in the view model. Much nicer, and cleaner. One other thing I introduced in this example is the use of spacer Rectangles. Rather than setting a width and/or height on the rectangles for spacing, I am now referencing a style in my ResourceDictionary called StandardSpacerStyle. I like doing this better than using margins or padding because now I have a reusable way to create space between elements, the Rectangle does not show (because I have not set its Fill color), and I can change my spacing throughout the user interface in one place if I'd like.

Tasks
This page is quite a bit different than the other four. It is a very simple, stripped-down version of the Tasks page in the Task-It application.
The Tasks.xaml UserControl has a ContentControl, and the Content of that control is set based on whether we are looking at the list of tasks or editing a task. So it displays one of two child UserControls, which are called List and Details. List has the RadGridView, Details has the form. In the code-behind of the Tasks UserControl I am once again setting its DataContext to a view model class. The nice thing is, whichever child UserControl is being displayed (List or Details) inherits its DataContext from its parent control (Tasks), so I do not have to explicitly set it. The List UserControl simply displays a RadGridView whose ItemsSource is bound to a property in the view model called Tasks, and its SelectedItem property is bound to a property in the view model called SelectedItem. The SelectedItem binding must be TwoWay so that the view is notified when the SelectedItem changes in the view model, and the view model is notified when something changes in the view (like when a user changes the Name and/or DueDate in the form); a minimal sketch of such a property also appears at the end of this post. You'll also notice that the form's TextBox and RadDatePicker are also TwoWay bound to the SelectedItem property in the view model. You can experiment with the binding by removing TwoWay and seeing how changes in the form do not show up in the RadGridView. So here we have an example of two different views (List and Details) that are both bound to the same view model... and actually, so is the Tasks UserControl, so it is really three views.

WCF RIA Services
By the way, I am using WCF RIA Services to retrieve data for the RadGridView and save the data when the user clicks the Save button in the form. I created a really simple ADO.NET Entity Data Model in WebinarProject.Web called DataModel.edmx. I also created a simple domain data service called DataService that has methods for retrieving data, inserting, updating and deleting. However, I am only using the retrieval and update methods in this sample. Note that I do not currently have any validation in place on the form, as I wanted to keep the sample as simple as possible.

Wrap up
Technically, I should move the calls to WCF RIA Services out of the view model and put them into a separate layer, but this works for now, and that is a topic for another day!
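To make the two MVVM techniques above concrete, here are two minimal sketches. Both are illustrations only: the class and member names (CommandViewModel, SaveCommand, SaveCompleted, SelectedItem, Task) are taken from the descriptions above, but the implementations are my own assumptions rather than the webinar's actual code.

First, the SaveCommand wiring from the Command page, using a hand-rolled ICommand:

using System;
using System.Windows.Input;

// Stand-in ICommand implementation (the webinar may have used a different one).
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    public DelegateCommand(Action execute) { _execute = execute; }

    public event EventHandler CanExecuteChanged;
    public bool CanExecute(object parameter) { return true; }
    public void Execute(object parameter) { _execute(); }
}

public class CommandViewModel
{
    public CommandViewModel()
    {
        SaveCommand = new DelegateCommand(Save);
    }

    // Bound in XAML to the RadMenuItem and the Button:
    // Command="{Binding SaveCommand}"
    public ICommand SaveCommand { get; private set; }

    // The view listens for this and shows its 'Saved' message.
    public event EventHandler SaveCompleted;

    private void Save()
    {
        // ... persist changes here ...
        var handler = SaveCompleted;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

Second, the TwoWay-bindable SelectedItem property from the Tasks page, raising PropertyChanged so the RadGridView and the form stay in sync:

using System;
using System.ComponentModel;

// Stand-in for the WCF RIA Services entity.
public class Task
{
    public string Name { get; set; }
    public DateTime? DueDate { get; set; }
}

public class TasksViewModel : INotifyPropertyChanged
{
    private Task _selectedItem;

    // TwoWay-bound to the RadGridView's SelectedItem and to the form fields.
    public Task SelectedItem
    {
        get { return _selectedItem; }
        set
        {
            if (_selectedItem == value) return;
            _selectedItem = value;
            OnPropertyChanged("SelectedItem");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}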

    Read the article

  • jQuery Selector Tester and Cheat Sheet

    - by SGWellens
    I've always appreciated these tools: Expresso and XPath Builder. They make designing regular expressions and XPath selectors almost fun! Did I say fun? I meant less painful. Being able to paste/load text and then interactively play with the search criteria is infinitely better than the code/compile/run/test cycle. It's faster, and you get a much better feel for how the expressions work. So, I decided to make my own interactive tool to test jQuery selectors: jQuery Selector Tester.

    Note: there are some existing tools you may like better:
    http://www.woods.iki.fi/interactive-jquery-tester.html
    http://www.w3schools.com/jquery/trysel.asp?filename=trysel_basic&jqsel=p.intro,%23choose

    My tool is different:
    - It is one page. You can save it and run it locally without a web server.
    - It shows the results as a list of iterated objects instead of highlighted HTML.
    - A cheat sheet is on the same page as the tester, which is handy.

    I couldn't upload an .htm or .html file to this site, so I hosted it on my personal site here: jQuery Selector Tester.

    Design Highlights:
    To make the interactive search work, I added a hidden div to the page:

        <!-- Hidden div holds DOM elements for jQuery to search -->
        <div id="HiddenDiv" style="display: none"></div>

    When ready to search, the searchable HTML text is copied into the hidden div; this renders the DOM tree in the hidden div:

        // get the html to search, insert it into the hidden div
        var Html = $("#TextAreaHTML").val();
        $("#HiddenDiv").html(Html);

    When doing a search, I modify the search pattern to look only in the HiddenDiv. To do that, I put a space between the patterns. The space is the Ancestor operator (see the Cheat Sheet):

        // modify the search string to only search in our
        // hidden div, and do the search
        var SearchString = "#HiddenDiv " + SearchPattern;
        try
        {
            var $FoundItems = $(SearchString);
        }
        catch (ex)
        {
            // an invalid selector throws; the tool reports the error
            // (catch body omitted in the original excerpt)
        }

    Big Fat Stinking Faux Pas:
    I was about to publish this article when I made a big mistake: I tested the tool with Mozilla Firefox. It blowed up…it blowed up real good. In the past I've only had to target IE, so this was quite a revelation. When I started to learn JavaScript, I was disgusted to see all the browser-dependent code. Who wants to spend their time testing against different browsers and versions of browsers? Adding a bunch of 'if-else' code is a tedious and thankless task. I avoided client code as much as I could. Then jQuery came along and all was good. It was browser independent and freed us from the tedium of worrying about version N of the Acme browser. Right? Wrong!

    I had used outerHTML to display the selected elements. The problem is that Mozilla Firefox doesn't implement outerHTML. I replaced this:

        // encode the html markup
        var OuterHtml = $('<div/>').text(this.outerHTML).html();

    With this:

        // encode the html markup
        var Html = $('<div>').append(this).html();
        var OuterHtml = $('<div/>').text(Html).html();

    Another problem was that Mozilla Firefox doesn't implement srcElement. I replaced this:

        var Row = e.srcElement.parentNode;

    With this:

        var Row = e.target.parentNode;

    Another problem was the indexing; the browsers have different ways of indexing. I replaced this:

        // this cell has the search pattern
        var Cell = Row.childNodes[1];

        // put the pattern in the search box and search
        $("#TextSearchPattern").val(Cell.innerText);

    With this:

        // get the correct cell and the text in the cell,
        // place the text in the search box and search
        var Cell = $(Row).find("TD:nth-child(2)");
        var CellText = Cell.text();
        $("#TextSearchPattern").val(CellText);

    So much for the myth of browser independence. Was I overly optimistic and gullible? I don't think so. And when I get my millions from the deposed Nigerian prince I sent money to, you'll see that having faith is not futile.

    Notes:
    - My goal was to have a single standalone file. I tried to keep the features and CSS to a minimum, adding only enough to make the tool useful and visually pleasing.
    - When testing, I often thought there was a problem with the jQuery selector. Invariably, it was invalid HTML code. If your results aren't what you expect, don't assume it's the jQuery selector pattern: the HTML may be invalid.
    - To help in development and testing, I added a double-click handler to the rows in the Cheat Sheet table. If you double-click a row, the search pattern is put in the search box, a search is performed, and the page is scrolled so you can see the results. I left the test HTML and code in the page.
    - If you are using a CDN (non-local) version of the jQuery library, the designer in Visual Studio becomes extremely slow. That's why there are two versions of the library in the header and one is commented out.

    For reference, here is the jQuery documentation on selectors: http://api.jquery.com/category/selectors/
    Here is a much more comprehensive list of CSS selectors (which jQuery uses): http://www.w3.org/TR/CSS2/selector.html

    I hope someone finds this useful.

    Steve Wellens
    CodeProject

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons.

    Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models.

    For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report, and typically they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

    With data mining, a structure is not selected in advance; the algorithms search for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

    Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the "I" in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

    Books
    - Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    - Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    - Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can't miss this book if you want to mine your data with SQL Server tools.
    - Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both business and technical perspectives.
    - Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    - Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
    - Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms.
    - Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    - Paolo Giudici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

    Forthcoming presentations
    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    - Wednesday, October 16th, 2013: Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
    - Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    At Sinergija 2013 in Belgrade, Serbia:
    - Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    At SQL Rally in Amsterdam, Netherlands:
    - Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel.
    Why three different titles for the same presentation? I don't know; I guess I forgot the name I proposed every time, right after I sent the proposal.

    Courses
    Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.

    OK, now you know: no more excuses. Start learning data mining, and get the most out of your data!

    Read the article

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!

    Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site, Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database, all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work.

    The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something, to keep that stuff out of this one :)

    If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10, I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :)

    I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs, it takes about an hour. The list is long enough now that from time to time I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition, as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at the 'Notes to bloggers' heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc.

    I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, jazz guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews.

    For a little over a year now, we've been tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed on a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking were the brainchild of my buddy Michael Washington.

    What I blog/shoutout
    After some time doing posts, I decided there were things that probably have no need to be searchable but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place.

    Notes to bloggers
    Remember I said spinning through the Big List-o-Blogs™ is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably OK. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise, if you only show the latest post on the front page and make it a puzzle to find the rest, or if you make the titles and initial graphics almost identical to the previous article's, I'll miss it.

    Another thing is name/brand recognition. Far be it from me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get some recognition, you are going to want your name to be available somewhere. I can think, right off the top of my head, of a couple of good blogs where I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blogosphere, being name-recognized is as important as getting your brand out there.

    Kick my tires
    Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone who takes the time to submit with a quicker 'Cream posting.

    Thanks!
    Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >