Search Results


  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where the customer was modifying the Content Server's contributor data files directly through Content Server. This operation is of course completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page that used the same data file would not pick up the updates immediately, for two reasons: first, the Spaces page was using Content Presenter to display the contents of the data file; second, the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes are picked up quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single content item, the contents under a folder, a list of items, or a content query, and then select a Content Presenter based template to render the content on a page in a Spaces application. In addition to displaying the folders and files in a Content Server, Content Presenter integrates with Oracle Site Studio to let you create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or a custom Content Presenter display template. More information about creating a Content Presenter display template can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection settings through Enterprise Manager. Under Cache Details there is a section to set the Cache Invalidation Interval. This enables the cache to be monitored by the cache "sweeper" utility. The sweeper queries the Content Server for changes and "marks" changed objects in the cache as "dirty", which causes the application to fetch a fresh copy of the document from the Content Server to replace the cached version. By default, the Cache Invalidation Interval is set to 0 (minutes), which means the sweeper is OFF. To turn the sweeper ON, just set a value in minutes; the minimal value that can be set is 2 (minutes).

    A note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and cannot be set back to 0. The good news is that the value can also be updated through a WLST command:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is sample output from that execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Application ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection command would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application and, under the Administration menu item, select System Audit Information. (Note: if your console is using the left menu display option, the Administration link will be located there.) Under Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections, check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option, which changes the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 minutes. Note the time stamps:

        requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use two different pages, each containing a Content Presenter task flow. Each task flow will use a different custom Content Presenter display template and will be assigned a different contributor data file (the documents that will be in the cache). The pages at run time appear as follows. When the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the two data files, DF_UCMCACHETESTER (the P1 page) and 113_ES (the P2 page), were requested over the (Spaces) VCR connection to the Content Server. The most important lines to notice are the VCR_GET_DOCUMENT_BY_NAME invocations. On subsequent refreshes of these two pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are now getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and then, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments and a refresh of the P1 page, the updates have been applied. Note: the refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not tied to when the change happened; the sweeper simply runs every 2 minutes. Refreshing the Content Server View Server Output again, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval the sweeper is invoked, which is indicated by the GET_... calls. Since the history has recorded the change, the next call is VCR_GET_DOCUMENT_BY_NAME, which retrieves the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls for that data file, which simply means it was retrieved from the cache.
    Upon further review of the server output, we can see that there was only one request for VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****
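    To pull the configuration steps above together, here is a hedged WLST session sketch; the admin server URL and credentials are placeholders, and the connection values are the ones from the sample output earlier in this post:

        # Hypothetical WLST session; the URL, password, and values are placeholders.
        connect('weblogic', 'your_password', 't3://webcenter.oracle.local:7001')
        # Inspect the current Content Server connection settings
        listJCRContentServerConnections('webcenter', verbose=true)
        # Turn the sweeper ON with the minimum 2-minute invalidation interval
        setJCRContentServerConnection(appName='webcenter', name='UCM',
            socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444',
            webContextRoot='/cs', cacheInvalidationInterval='2',
            binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)
        # Remember: restart the Spaces managed server for the change to take effect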

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions: "What is the difference between a DELETE and a TRUNCATE?" Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the follow-up challenge. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing his article here.

    Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, and process definition and implementation. He has a comprehensive grasp of database administration, development, and implementation with MS SQL Server as well as C, C++, and Visual C++/C#, and about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups and actively contributes to the community on forums and websites such as SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com, and many others. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer's views in any way.

    All data everywhere goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE, and DELETE, or simply CRUD. In Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE, and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

    DELETE:
    - DELETE supports a WHERE clause
    - DELETE removes rows from a table row-by-row
    - Because DELETE operates row-by-row, it acquires row-level locks
    - Depending upon the recovery model of the database, DELETE is a fully-logged operation
    - Because DELETE operates row-by-row, it can fire triggers

    TRUNCATE:
    - TRUNCATE does not support a WHERE clause
    - TRUNCATE works by directly deallocating the data pages of a table
    - TRUNCATE takes a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
    - TRUNCATE is therefore a minimally-logged operation; again, this depends upon the recovery model of the database
    - Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

    (A quick T-SQL sketch illustrating these differences follows below.)

    Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?" After returning home and having a nice cup of coffee, my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
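    Before tackling the homework question, here is a small, hedged T-SQL illustration of the DELETE/TRUNCATE differences summarized above; the table name and data are made up for the demo:

        -- Demo table; name and data are hypothetical, for illustration only
        CREATE TABLE dbo.CrudDemo (Id INT IDENTITY(1,1), City NVARCHAR(50));
        INSERT INTO dbo.CrudDemo VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad');

        -- DELETE supports a WHERE clause, removes rows one by one, and fires any DML triggers
        DELETE FROM dbo.CrudDemo WHERE City = N'Delhi';

        -- TRUNCATE takes no WHERE clause and deallocates the data pages in one sweep;
        -- as a side effect it also reseeds the IDENTITY column
        TRUNCATE TABLE dbo.CrudDemo;

        INSERT INTO dbo.CrudDemo VALUES (N'Pune');
        SELECT Id, City FROM dbo.CrudDemo;  -- Id = 1 again; after the DELETE alone it would have been 4

        DROP TABLE dbo.CrudDemo;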
    Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

    Now, what does this mean? Unlike DELETE, one cannot directly grant or revoke TRUNCATE rights for a user or set of users. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security is built around the concept of a "securable", which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books Online).

    SETTING UP THE ENVIRONMENT (01A_Truncate Table Permissions.sql; script provided at the end of the article)

    By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:
    1. A test database
    2. Two database roles, with associated logins and users
    3. A test table in the test database, with some data added to it (I am using row constructors, which are new to SQL 2008)

    Creating the modules that will be used to enforce permissions:
    1. We have already created one of the modules that we will be assigning permissions to: the table TruncatePermissionsTest.
    2. We will now create two stored procedures, one for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes the end result is the same: all data is removed from the table TruncatePermissionsTest.

    Assigning the permissions. Now comes the most important part of the demonstration, assigning permissions. A permissions matrix can be worked out as shown in the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

    THE TEST (01B_Truncate Table Test Queries.sql; script provided at the end of the article)

    I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screenshots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here; I will leave the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios as a form of self-study:
    1. Testing SELECT permissions
    2. Testing TRUNCATE permissions (remember, "deny by default"?)
    3. Trying to circumvent security by trying to TRUNCATE the table using the stored procedure

    Hence, we have now proved that TRUNCATE permissions can indeed be assigned to a specific user. I also hope that the above has sparked some curiosity towards putting some security around the potentially "destructive" operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server.

    (Please find below the scripts 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader's production environments.)

    01A_Truncate Table Permissions.sql

        /* *****************************************************************************
        Developed By : Nakul Vachhrajani
        Functionality: This demo is focused on how to allow only TRUNCATE permissions to a particular user
        How to Use   : 1. Run through, step-by-step, the sequence up to Step 08 to create a test database
                       2. Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two
                          different SSMS windows, one logged in as 'RestrictedTruncate', the other as 'AllowedTruncate'
                       3. Come back to "Truncate Table Permissions.sql"
                       4. Execute Step 10 to clean up!
        Modifications: December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security
                       December 12, 2010 - NAV - Created
        ***************************************************************************** */

        -- Step 01: Create a new test database
        CREATE DATABASE TruncateTestDB
        GO
        USE TruncateTestDB
        GO

        -- Step 02: Add roles and users to demonstrate the security of the Truncate operation
        -- 2a. Create the new roles
        CREATE ROLE AllowedTruncateRole;
        GO
        CREATE ROLE RestrictedTruncateRole;
        GO

        -- 2b. Create new logins
        CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
        GO
        CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
        GO

        -- 2c. Create new users using the roles and logins created above
        CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
        GO
        CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
        GO

        -- 2d. Add the newly created users to the newly created roles
        sp_addrolemember 'AllowedTruncateRole','TruncateUser'
        GO
        sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
        GO

        -- Step 03: Change over to the test database
        USE TruncateTestDB
        GO

        -- Step 04: Create a test table within the test database
        CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
        GO

        -- Step 05: Populate the required data
        INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
        GO

        -- Step 06: Encapsulate the DELETE within another module
        CREATE PROCEDURE proc_DeleteMyTable
        WITH EXECUTE AS SELF
        AS
        DELETE FROM TruncateTestDB..TruncatePermissionsTest
        GO

        -- Step 07: Encapsulate the TRUNCATE within another module
        CREATE PROCEDURE proc_TruncateMyTable
        WITH EXECUTE AS SELF
        AS
        TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
        GO

        -- Step 08: Apply security
        /* ************************** SECURITY MATRIX ***********************************
        Object                  | Permission                     | NoTruncateUser       | TruncateUser
                                |                                | (RestrictedTruncate) | (AllowedTruncate)
        ------------------------+--------------------------------+----------------------+------------------
        TruncatePermissionsTest | SELECT, INSERT, UPDATE, DELETE | GRANT                | (default)
        TruncatePermissionsTest | ALTER                          | DENY                 | (default)
        proc_DeleteMyTable      | EXECUTE                        | GRANT                | DENY
        proc_TruncateMyTable    | EXECUTE                        | DENY                 | GRANT
        ************************** SECURITY MATRIX ******************************** */

        /* Table: TruncatePermissionsTest */
        GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
        GO
        DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
        GO

        /* Procedure: proc_DeleteMyTable */
        GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
        GO
        DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
        GO

        /* Procedure: proc_TruncateMyTable */
        DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
        GO
        GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
        GO

        -- Step 09: Test
        -- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
        --    1. one where you have logged in as 'RestrictedTruncate', and
        --    2. the other as 'AllowedTruncate'

        -- Step 10: Cleanup
        sp_droprolemember 'AllowedTruncateRole','TruncateUser'
        GO
        sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
        GO
        DROP USER TruncateUser
        GO
        DROP USER NoTruncateUser
        GO
        DROP LOGIN AllowedTruncate
        GO
        DROP LOGIN RestrictedTruncate
        GO
        DROP ROLE AllowedTruncateRole
        GO
        DROP ROLE RestrictedTruncateRole
        GO
        USE MASTER
        GO
        DROP DATABASE TruncateTestDB
        GO

    01B_Truncate Table Test Queries.sql

        /* *****************************************************************************
        Developed By : Nakul Vachhrajani
        Functionality: This demo is focused on how to allow only TRUNCATE permissions to a particular user
        How to Use   : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                       2. Execute this step-by-step in two different SSMS windows
                          a. One where you have logged in as 'RestrictedTruncate', and
                          b. The other as 'AllowedTruncate'
                       3. Return to "Truncate Table Permissions.sql"
                       4. Execute Step 10 to clean up!
        Modifications: December 12, 2010 - NAV - Created
        ***************************************************************************** */

        -- Step 09A: Switch to the test database
        USE TruncateTestDB
        GO

        -- Step 09B: Ensure that we have valid data
        SELECT * FROM TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 1
        -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09C: Attempt to truncate data from the table without using the stored procedure
        TRUNCATE TABLE TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur)
        -- Msg 1088, Level 16, State 7, Line 2
        -- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

        -- Step 09D: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 1
        -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09E: Attempt to truncate data from the table using the stored procedure
        EXEC proc_TruncateMyTable
        GO
        -- (Expected: Will execute successfully as the 'AllowedTruncate' user; will error out as under with 'RestrictedTruncate')
        -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
        -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09F: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
        GO

        -- Step 09G: Attempt to delete data from the table without using the stored procedure
        DELETE FROM TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 2
        -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09H: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
        GO

        -- Step 09I: Attempt to delete data from the table using the stored procedure
        EXEC proc_DeleteMyTable
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
        -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09J: Close this SSMS window and return to "Truncate Table Permissions.sql"

    Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Groups have the talent to solve difficult problems.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

    What is the CODE Keyboard?

    The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and the Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

    The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

    Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those who didn't start using computers until the mid-to-late 1990s have most likely always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try to hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

    Setting Up the CODE Keyboard

    Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS/2 adapter, and a tool you may be unfamiliar with: a key puller. We'll return to the key puller in a moment.

    Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
    After you've secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some DIP switches inside. The DIP switches are there to switch hardware functions for various operating systems and keyboard layouts, and to enable or disable function keys. By toggling the DIP switches you can change the keyboard from QWERTY mode to Dvorak or Colemak mode, the two most popular alternative keyboard layouts. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 DIP switch: you can disable the Caps Lock key; goodbye to accidentally pressing Caps Lock when you mean to press Shift. You can review the entire DIP switch configuration chart here.

    The quick start for Windows users is simple: double-check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key, as typically found on laptop keyboards). After adjusting the DIP switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

    Design, Layout, and Backlighting

    The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We call the layout traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout or spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location.

    Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, and Pause keys and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft SideWinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly.

    Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the switches the keys themselves are attached to are mounted on a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice, even illumination between the keys.

    Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It's rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key model is nearly half a pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath it, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

    Examining the Keys

    This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard and its general construction, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction: the key floats in a plastic frame over a rubber membrane that has a little rubber dome for each key. Pressing the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board.

    Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to press that key all the way down. This slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue.

    The CODE keyboard features key switches manufactured by Cherry, a company that has made key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches share the classic design of the other Cherry switches (such as the MX Blue and Brown lineups) but are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn't to say that the keyboard doesn't have a nice audible key press sound when a key is fully depressed, but the mechanism doesn't produce a loud click when it triggers.

    One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any words-per-minute records, that little bump when pressing the key is satisfying.

    The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million. You'd have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard.

    So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke.

    The last feature worthy of mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards, and even gaming keyboards typically only offer rollover on high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you stay on the native USB connection, you still get 6-key rollover (and Ctrl, Alt, and Shift don't count towards the 6), so realistically you still won't be able to out-type the computer; even finger-twisting keyboard combos and high-speed typing will fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance you're a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

    The Good, The Bad, and the Verdict

    We've put the CODE keyboard through its paces: we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

    The Good:
    - The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
    - The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gunshot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
    - The DIP switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and layouts. If you're investing a chunk of change in a keyboard, it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing.
    - The backlighting is perfect. You can adjust it from a barely visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
    - You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
    - The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.

    The Bad:
    - While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags on lower-end keyboards.
    - People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

    The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
    Whether you're a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high, and the typing experience so enjoyable, that you're easily getting ten times the value you'd get out of a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome keyboard), there's no good reason not to pick up a CODE keyboard.

    Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, which leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted.

    A lot of people have blogged about this, among them:
    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices for solving this (TBP).
    - Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:
    1. Publishing of test results from builds
    2. Publishing of manual test results, and their attachments in particular
    3. Publishing of deployment binaries for use during a test run
    4. Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run, and some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries, which happens for automated test runs, including tests run during a build using code coverage (which includes all the files in the deployment folder), also contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:
    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries; I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them.

    If you have multiple project collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session to the SQL Server hosting your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results. It is pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large

    Keep a special watch on the tbl_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the version control category, so the only big issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (some names removed), out of more than 100 team projects accumulated over quite some years. As can be seen here, it is pretty obvious that "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next candidates.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    We then got this result. From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    This gives a result like the above. Now you should have collected enough information to tell you what to do, if you have to do anything at all, plus some of the information you need to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this, install the following set of updates and components. (The assumption is that your TFS server is at SP1 level.)

    Install the QFE for KB2608743, which also contains detailed instructions on its use (download from here). The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
    Binaries will still be uploaded if:
    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the test settings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:
    - The build servers (the build agents)
    - The machine hosting the Test Controller
    - Local development computers (Visual Studio)
    - Local test computers (MTM)

    It is not required on the TFS server, the test agents, or the build controller; it has no effect on those programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem with hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to install this CU. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: If you suspect hanging ghost files, they can, with some mental effort, be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The underlying problem is a stalled ghost cleanup process. To restart it, stop all components that depend on the SQL Server, that is, all applications that connect to it, such as the TFS and SharePoint services; then restart the SQL Server, and finally start all the dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes, I will add a link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; without the CU, the SQL Server can get into this hanging state in certain cases because of this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

    "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghost-hang problem occurring, and it makes it easier for the SQL Server ghosting process to work smoothly.

    "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

    This is the last step in a three-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink database command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and is controlled by a settings file. The parameters point out a server URI, including the team project collection, and a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script that runs the TAC multiple times, once for each team project; a sketch of such a wrapper script follows.
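    As an illustration, here is a minimal Windows batch sketch of such a wrapper; the collection URL, team project names, and file paths are placeholders, and the tcmpt command line is the one shown later in this article:

        @echo off
        rem Hypothetical nightly TAC wrapper; adjust the placeholders to your environment.
        set COLLECTION=http://your-tfs-server:8080/tfs/DefaultCollection
        set SETTINGS=C:\TAC\nightly-settings.xml

        rem One tcmpt run per team project, since /teamproject takes a single project.
        for %%P in ("TeamProject1" "TeamProject2" "TeamProject3") do (
            tcmpt attachmentcleanup /collection:%COLLECTION% /teamproject:"%%~P" /settingsfile:%SETTINGS% /outputfile:"%TEMP%\%%~P.tcmpt.log" /mode:delete
        )

    Scheduling something like this through Windows Task Scheduler at a quiet hour (say 3 AM, as Ravi suggests) keeps each night's deletes small.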
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique: it works, but you need to take care when cleaning up.

Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in a general scheduled maintenance job.

There are two scenarios we need to consider:
1. Cleaning up an existing overgrown database
2. Maintaining a server to avoid an overgrown database, using a scheduled TAC

1. Cleaning up a database which has grown too big due to these attachments

This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff – that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

If you want to remove only dll's and pdb's above that size, add an Extensions section. Without that section, attachments with any extension will be deleted:

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC

Run a schedule every night and remove old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost cleanup process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments that have active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved"/>
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above with no LinkedBugs section as a sweeper task, taking away all data older than you could possibly care about, and then have another scheduled TAC task that more specifically takes out the attachments you are not likely to use.
This second task can be much more specific and, based on your analysis, clean out what you know is troublesome data:

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace"/>
      <Include value="dll"/>
      <Include value="pdb"/>
      <Include value="wmv"/>
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes only "internal" extensions, but in fact it recognizes any extension. To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database

You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation leaves the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough.

For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database.

To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards (DBCC INDEXDEFRAG only reorganizes the indexes, which as noted above is not enough – use a rebuild). I find the easiest way to do this is to create a SQL Maintenance Plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
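For reference, a minimal T-SQL sketch of that occasional shrink-and-rebuild step; the database and table names are placeholders, and in practice the Maintenance Plan tasks mentioned above do the same thing across all tables:

USE [YourCollectionDatabase];
GO
-- Reclaim the space freed by the TAC and the ghost cleanup
DBCC SHRINKDATABASE ([YourCollectionDatabase]);
GO
-- Shrinking leaves the indexes badly fragmented, so rebuild them afterwards
-- (a rebuild, not a reorganize, is what removes the fragmentation)
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;
GO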

    Read the article

  • SQL Server 2005: Improving performance for thousands of Insert requests. logout-login time = 120ms.

    - by Rad
Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is fetched from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 login/logout pairs per second (reported at the same time by Profiler), and then after 15ms I again get a series of 100-200 login/logouts per second. I need clarification on how this might affect more complex insert queries in a production environment.

I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and executes two stored procedures on a remote SQL Server 2005: the first inserts a parent record and retrieves the Identity value, which is then used to call the second stored procedure one, two, or multiple times (sometimes several thousand), inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered ones. There is a pretty complex insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records.

When I run Profiler on the test server (which is almost equivalent to the production server) I can see about 120ms between the Audit Logout and Audit Login trace entries, which would almost give me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads hundreds of thousands of records, does daily planning, and has an SLA to fulfill for client requests arriving as flat-file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes.

I was thinking of using the BatchSize of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe those have to take some time to finish. What is worse, the company uses the biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update that changes that value to a new one which other applications use to get committed rows.

Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging, in a separate database and with no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB uses the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row.

Connection pooling is working, but with many login/logouts. Why?
for (int i = 1; i <= 10000; i++)
{
    using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
    {
        conn.Open();
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "use tempdb";
            cmd.ExecuteNonQuery();
        }
    }
}

SQL Server Profiler trace:

Audit Login                           master  2010-01-13 23:18:45.337  1 - Nonpooled
SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.337
RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.337
Audit Logout                          tempdb  2010-01-13 23:18:45.337  2 - Pooled
Audit Login  -- network protocol      master  2010-01-13 23:18:45.383  2 - Pooled
SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.383
RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
Audit Logout                          tempdb  2010-01-13 23:18:45.383  2 - Pooled
Audit Login  -- network protocol      master  2010-01-13 23:18:45.383  2 - Pooled
SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.383
RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
Audit Logout                          tempdb  2010-01-13 23:18:45.383  2 - Pooled
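Since the question already lists bulk inserts from a DataTable as a candidate, here is a minimal sketch of that approach using SqlBulkCopy; the staging table name and batch size are illustrative assumptions, not part of the original code:

using System.Data;
using System.Data.SqlClient;

static void BulkInsertChildren(DataTable children, string connectionString)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
        {
            // Hypothetical staging table: no indexes, no triggers
            bulk.DestinationTableName = "dbo.ChildStaging";
            bulk.BatchSize = 5000; // commit in chunks to limit transaction log pressure
            bulk.WriteToServer(children);
        }
    }
}

Loading into an unindexed staging table and then moving the rows into the real parent/child tables with one set-based statement avoids paying the per-row round-trip cost ten thousand times, and defers the index and trigger work to a single operation.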

    Read the article

  • Passing parameters between Silverlight and ASP.NET – Part 1

    - by mohanbrij
While working with Silverlight applications we may face scenarios where we need to embed Silverlight as a component, for example in SharePoint web parts, or simply host it in a plain ASP.NET page. The biggest challenge comes when we have to pass parameters from ASP.NET to the Silverlight component, or back from Silverlight to ASP.NET. There are several ways to do this, such as using InitParams, query strings, HTML objects in Silverlight, etc. Each of these techniques has its own advantages, disadvantages, or limitations. Let's look at them one by one to see when to choose each and how to implement it.

1. InitParams: Let's start with InitParams. Start your Visual Studio 2010 IDE and create a Silverlight application, giving it any name. Now go to the ASP.NET web project which is used to host the Silverlight XAP component. You will find lots of different <param> tags used by the Silverlight object. To pass parameters from ASP.NET to the Silverlight object, Silverlight provides a param called initParams.

1: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
2: <param name="source" value="ClientBin/SilverlightApp.xap"/>
3: <param name="onError" value="onSilverlightError" />
4: <param name="background" value="white" />
5: <param name="minRuntimeVersion" value="4.0.50826.0" />
6: <param name="initparams" id="initParams" runat="server" value=""/>
7: <param name="autoUpgrade" value="true" />
8: <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
9: <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
10: </a>
11: </object>

Here in the code above I have included an initParams entry as a param tag (line 6). Now, in the page load, I add a line:

1: initParams.Attributes.Add("value", "key1=Brij, key2=Mohan");

This basically adds a value parameter inside the initParams tag. So that's all we need on the ASP.NET side. Coming to the Silverlight code, open the code-behind of App.xaml and add the following lines:

1: private string firstKey, secondKey;
2: private void Application_Startup(object sender, StartupEventArgs e)
3: {
4: if (e.InitParams.ContainsKey("key1"))
5: this.firstKey = e.InitParams["key1"];
6: if (e.InitParams.ContainsKey("key2"))
7: this.secondKey = e.InitParams["key2"];
8: this.RootVisual = new MainPage(firstKey, secondKey);
9: }

This code fetches the init params and passes them to our MainPage.xaml constructor. In MainPage.xaml we can use these variables according to our requirements; here in this example I simply display them in a message box:

1: public MainPage(string param1, string param2)
2: {
3: InitializeComponent();
4: MessageBox.Show("Welcome, " + param1 + " " + param2);
5: }

Running this shows a message box with "Welcome, Brij Mohan".

Limitations: Depending on the browser, there is a limit on the overall string length of the parameters you can pass. For more details on this limitation, refer to this link: http://www.boutell.com/newfaq/misc/urllength.html

2. QueryStrings: To show this example I take the scenario where we have a Default.aspx page, we navigate to SilverlightAppTestPage.aspx, and we have to work with the parameters passed by Default.aspx in the Silverlight component hosted on that page.
So first I add a new page to my application containing a button with ID=btnNext; on the button's click I redirect to SilverlightAppTestPage.aspx with the required query strings.

Code of Default.aspx:

1: protected void btnNext_Click(object sender, EventArgs e)
2: {
3: Response.Redirect("~/SilverlightAppTestPage.aspx?FName=Brij" + "&LName=Mohan");
4: }

Code of MainPage.xaml.cs:

1: public partial class MainPage : UserControl
2: {
3: public MainPage()
4: {
5: InitializeComponent();
6: this.Loaded += new RoutedEventHandler(MainPage_Loaded);
7: }
8:
9: void MainPage_Loaded(object sender, RoutedEventArgs e)
10: {
11: IDictionary<string, string> qString = HtmlPage.Document.QueryString;
12: string firstName = string.Empty;
13: string lastName = string.Empty;
14: foreach (KeyValuePair<string, string> keyValuePair in qString)
15: {
16: string key = keyValuePair.Key;
17: string value = keyValuePair.Value;
18: if (key == "FName")
19: firstName = value;
20: else if (key == "LName")
21: lastName = value;
22: }
23: MessageBox.Show("Welcome, " + firstName + " " + lastName);
24: }
25: }

Set the startup page to Default.aspx and run the application. You will again see the "Welcome, Brij Mohan" message box. Since here you are also using query strings to pass your parameters, you depend on the browser's limit on query string length; the limitation link from the previous example applies here as well.

3. Using HtmlPage.Document (Silverlight to ASP.NET <-> ASP.NET to Silverlight): To show this I set up a sample Silverlight application with Get Data and Set Data buttons and a data TextBox. In the ASP.NET page I kept a TextBox to show how values passed between Silverlight and ASP.NET are reflected back. When I say Get Data, it pulls the data from the ASP.NET TextBox into the Silverlight control's TextBox, and when I say Set Data, it sets the value from the Silverlight TextBox back into the ASP.NET TextBox. Now let's see the code that does this. This is my ASP.NET source code; here I have just created a TextBox named txtData:

1: <body>
2: <form id="form1" runat="server" style="height:100%">
3: <div id="silverlightControlHost">
4: ASP.NET TextBox: <input type="text" runat="server" id="txtData" value="Some Data" />
5: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
6: <param name="source" value="ClientBin/SilverlightApplication1.xap"/>
7: <param name="onError" value="onSilverlightError" />
8: <param name="background" value="white" />
9: <param name="minRuntimeVersion" value="4.0.50826.0" />
10: <param name="autoUpgrade" value="true" />
11: <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
12: <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
13: </a>
14: </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe>
15: </div>
16: </form>
17: </body>

My actual logic for getting and setting the data lives in the Silverlight control. This is my XAML code with the TextBox and buttons:
1: <Grid x:Name="LayoutRoot" Background="White" Height="100" Width="450" VerticalAlignment="Top">
2: <Grid.ColumnDefinitions>
3: <ColumnDefinition Width="110" />
4: <ColumnDefinition Width="110" />
5: <ColumnDefinition Width="110" />
6: <ColumnDefinition Width="110" />
7: </Grid.ColumnDefinitions>
8: <TextBlock Text="Silverlight Text Box: " Grid.Column="0" VerticalAlignment="Center"></TextBlock>
9: <TextBox x:Name="DataText" Width="100" Grid.Column="1" Height="20"></TextBox>
10: <Button x:Name="GetData" Width="100" Click="GetData_Click" Grid.Column="2" Height="30" Content="Get Data"></Button>
11: <Button x:Name="SetData" Width="100" Click="SetData_Click" Grid.Column="3" Height="30" Content="Set Data"></Button>
12: </Grid>

Now we write a few lines of button event handlers for Get Data and Set Data, which make use of the System.Windows.Browser namespace:

1: private void GetData_Click(object sender, RoutedEventArgs e)
2: {
3: DataText.Text = HtmlPage.Document.GetElementById("txtData").GetProperty("value").ToString();
4: }
5:
6: private void SetData_Click(object sender, RoutedEventArgs e)
7: {
8: HtmlPage.Document.GetElementById("txtData").SetProperty("value", DataText.Text);
9: }

That's it. When we run this application, Get Data copies the ASP.NET TextBox value into the Silverlight TextBox, and Set Data pushes it back the other way.

4. Using Object Serialization: This is useful when we want to pass data objects from our ASP.NET application to Silverlight controls and back. This technique builds on the HtmlPage.Document technique I described in point 3 above. Since this is itself a lengthy topic, I am going to cover the details, with sample code, in Part 2 of this post very soon.
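Until Part 2 arrives, here is one possible sketch of the idea – my own illustration, not the author's Part 2 code. The object is serialized to JSON on the server and parked in a hidden field, and Silverlight reads it back via HtmlPage.Document exactly as in point 3. The UserInfo type and the hdnUser hidden field are assumptions made up for this example:

// Shared data contract (assumed to exist in both projects)
[DataContract]
public class UserInfo
{
    [DataMember] public string FirstName { get; set; }
    [DataMember] public string LastName { get; set; }
}

// ASP.NET code-behind: serialize the object into a hidden field (hdnUser)
var serializer = new DataContractJsonSerializer(typeof(UserInfo));
using (var ms = new MemoryStream())
{
    serializer.WriteObject(ms, new UserInfo { FirstName = "Brij", LastName = "Mohan" });
    hdnUser.Value = Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
}

// Silverlight side: read the hidden field and deserialize
string json = HtmlPage.Document.GetElementById("hdnUser").GetProperty("value").ToString();
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(json)))
{
    UserInfo user = (UserInfo)new DataContractJsonSerializer(typeof(UserInfo)).ReadObject(ms);
}

This assumes DataContractJsonSerializer is available on both sides (System.Runtime.Serialization.Json, shipped with .NET 3.5+ and Silverlight 3+), along with the usual System.IO, System.Text, and System.Windows.Browser usings.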

    Read the article

  • Can't connect to WCF service on Android

    - by illvm
I am trying to connect to a .NET WCF service from Android using kSOAP2 (v 2.1.2), but I keep getting a fatal exception whenever I make the service call. I'm having difficulty tracking down the error and can't figure out why it's happening. The code I am using is below:

package org.example.android;

import org.ksoap2.SoapEnvelope;
import org.ksoap2.serialization.PropertyInfo;
import org.ksoap2.serialization.SoapObject;
import org.ksoap2.serialization.SoapSerializationEnvelope;
import org.ksoap2.transport.HttpTransport;

import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;
import android.content.Intent;
import android.os.Bundle;

public class ValidateUser extends Activity {

    private static final String SOAP_ACTION = "http://tempuri.org/mobile/ValidateUser";
    private static final String METHOD_NAME = "ValidateUser";
    private static final String NAMESPACE = "http://tempuri.org/mobile/";
    private static final String URL = "http://192.168.1.2:8002/WebService.Mobile.svc";

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.validate_user);

        Intent intent = getIntent();
        Bundle extras = intent.getExtras();
        if (extras == null) {
            this.finish();
        }
        String username = extras.getString("username");
        String password = extras.getString("password");
        Boolean validUser = false;
        try {
            SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);
            PropertyInfo uName = new PropertyInfo();
            uName.name = "userName";
            PropertyInfo pWord = new PropertyInfo();
            pWord.name = "passWord";
            request.addProperty(uName, username);
            request.addProperty(pWord, password);

            SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
            envelope.dotNet = true;
            envelope.setOutputSoapObject(request);

            HttpTransport androidHttpTransport = new HttpTransport(URL);
            androidHttpTransport.call(SOAP_ACTION, envelope); // error occurs here

            Integer userId = (Integer) envelope.getResponse();
            validUser = (userId != 0);
        } catch (Exception ex) {
            // note: swallowing the exception here hides the real transport error
        }
    }

    private void exit() {
        this.finish();
    }
}

EDIT: Removed the old stack traces. In summary, the first problem was the program being unable to open a connection because of missing methods/libraries: I was using vanilla kSOAP2 rather than the library modified for Android (kSOAP2-Android). The second issue was a settings problem: in the manifest I had not added the following permission:

<uses-permission android:name="android.permission.INTERNET" />

I am now having an issue with the XmlPullParser which I need to figure out.
12-23 10:58:06.480: ERROR/SOCKETLOG(210): add_recv_stats recv 0 12-23 10:58:06.710: WARN/System.err(210): org.xmlpull.v1.XmlPullParserException: unexpected type (position:END_DOCUMENT null@1:0 in java.io.InputStreamReader@433fb070) 12-23 10:58:06.710: WARN/System.err(210): at org.kxml2.io.KXmlParser.exception(KXmlParser.java:243) 12-23 10:58:06.720: WARN/System.err(210): at org.kxml2.io.KXmlParser.nextTag(KXmlParser.java:1363) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.SoapEnvelope.parse(SoapEnvelope.java:126) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.Transport.parseResponse(Transport.java:63) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:100) 12-23 10:58:06.730: WARN/System.err(210): at org.example.android.ValidateUser.onCreate(ValidateUser.java:68) 12-23 10:58:06.730: WARN/System.err(210): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1122) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2104) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2157) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.access$1800(ActivityThread.java:112) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1581) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Handler.dispatchMessage(Handler.java:88) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Looper.loop(Looper.java:123) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.main(ActivityThread.java:3739) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invokeNative(Native Method) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invoke(Method.java:515) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) 12-23 10:58:06.730: WARN/System.err(210): at dalvik.system.NativeStart.main(Native Method)
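For reference, a minimal sketch of the same call using the Android-specific kSOAP2-Android build mentioned in the edit, with HttpTransportSE (the class visible in the stack trace) in place of HttpTransport; this is an illustration, not the poster's final code:

import org.ksoap2.SoapEnvelope;
import org.ksoap2.serialization.SoapObject;
import org.ksoap2.serialization.SoapSerializationEnvelope;
import org.ksoap2.transport.HttpTransportSE;

// ... inside the activity, using the same constants as above
SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);
request.addProperty("userName", username);
request.addProperty("passWord", password);

SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
envelope.dotNet = true; // required when calling a .NET/WCF service
envelope.setOutputSoapObject(request);

HttpTransportSE transport = new HttpTransportSE(URL);
transport.call(SOAP_ACTION, envelope);
Object response = envelope.getResponse();

And the manifest permission, exactly as noted in the edit above:

<uses-permission android:name="android.permission.INTERNET" />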

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
Here's something I didn't find out until today: you can use Visual Studio to easily create registrationless COM manifest files for you, with just a couple of small steps.

Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure, it's rarely quite that easy - you need to watch out for dependencies - but if you know you have COM components that are lightweight and have no (or known) dependencies, it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (i.e. yourapp.exe.manifest).

I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane…

Create a COM Component

I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one:

DEFINE CLASS SimpleServer as Session OLEPUBLIC

FUNCTION HelloWorld(lcName)
RETURN "Hello " + lcName

ENDDEFINE

Compile it into a DLL COM component with:

BUILD MTDLL simpleserver FROM simpleserver RECOMPILE

And to make sure it works, test it quickly from Visual FoxPro:

server = CREATEOBJECT("simpleServer.simpleserver")
MESSAGEBOX( server.HelloWorld("Rick") )

Using Visual Studio to create a Manifest File for a COM Component

Next open Visual Studio and create a new executable project - a Console app or a WinForms or WPF application will all do.

- Go to the References node
- Select Add Reference
- Use the Browse tab and find your compiled DLL to import
- Next you'll see your assembly in the project. Right click on the reference and select Properties
- Click on the Isolated dropdown and select True

Compile, and that's all there's to it. Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd"
          manifestVersion="1.0"
          xmlns:asmv1="urn:schemas-microsoft-com:asm.v1"
          xmlns:asmv2="urn:schemas-microsoft-com:asm.v2"
          xmlns:asmv3="urn:schemas-microsoft-com:asm.v3"
          xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
          xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1"
          xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2"
          xmlns="urn:schemas-microsoft-com:asm.v1"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL" asmv2:size="27293">
    <hash xmlns="urn:schemas-microsoft-com:asm.v2">
      <dsig:Transforms>
        <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" />
      </dsig:Transforms>
      <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
      <dsig:DigestValue>puq+ua20bbidGOWhPOxfquztBCU=</dsig:DigestValue>
    </hash>
    <typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" />
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
</assembly>

Now let's finish our super complex console app to test with:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Type type = Type.GetTypeFromProgID("simpleserver.simpleserver", true);
            dynamic server = Activator.CreateInstance(type);
            Console.WriteLine(server.HelloWorld("rick"));
            Console.ReadLine();
        }
    }
}

Now run the console application… As expected, that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component, re-run, and see what happens.

- Go to the command prompt
- Change to the folder where the DLL is installed
- Unregister with: RegSvr32 -u simpleserver.dll

To be sure that the COM component no longer works, check it with the same test you used earlier (i.e. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment, VBScript, etc.). Make sure you run the EXE and don't re-compile the application, or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact, now that we have our .manifest file you can remove the COM object from the project. Run the EXE from Windows Explorer or a command prompt to avoid the recompile.

Watch out for embedded Manifest Files

Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embed a manifest file into the compiled EXE, which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time, and the compiled manifest takes precedence. Uh, thanks Visual Studio - not very helpful…

Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue, as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE, for example, will work immediately with the generated manifest file as is.
If you are using .NET and Visual Studio you have a couple of options for getting around this:

- Remove the embedded manifest file
- Copy the contents of the generated manifest file into a project manifest file and compile that in

To remove an embedded manifest in a Visual Studio project:

- Open the Project Properties (Alt-Enter on the project node)
- Go down to Resources | Manifest and select Create Application without a Manifest

You can now use the external manifest file, and it will actually be respected when the app runs.

The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice that on the dialog above I did this for app.exe.manifest, and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files, and that works as well. Remove the simpleserver.dll reference so you can compile your code, and run the application. Now it should work without COM registration of the component.

Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable: once they are there they can't be overridden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. Either option is there.

Watch for Manifest Caching

While trying to get this to work I ran into some problems at first. Specifically, when it wasn't working (due to the embedded manifest) I played with various different manifest layouts in different files, etc. There are a number of different ways to actually represent manifest files, including offloading to a separate folder (more on that later). A few times I made deliberate errors in the manifest file, and I found that regardless of what I did, once the app had failed or worked, no amount of changing the manifest file would make it behave differently. It appears that Windows caches the manifest data for a given EXE or DLL, and it takes a restart or a recompile of either the EXE or the DLL to clear the caching.

So: recompile your servers in order to see manifest changes, unless there's an outright failure from an invalid manifest file. If the app starts, the manifest is read and cached immediately. This can be very confusing, especially if you don't know that it's happening. I found myself always recompiling the EXE after each run and before making any changes to the manifest file.

Don't forget about Runtimes of COM Objects

In the example above I used a Visual FoxPro COM component. Visual FoxPro is a runtime-based environment, so if I'm going to distribute an application that uses a FoxPro COM object, the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines, make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain:

- The EXE: App.exe
- The manifest file (unless it's compiled in): App.exe.manifest
- The COM object DLL: simpleserver.dll
- The Visual FoxPro runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded DLLs), vfp9rENU.dll, msvcr71.dll

All these files should be in the same folder.
Debugging Manifest Load Errors

If you for some reason get your manifest loading wrong, there is a very useful tool available - SxsTrace, with its Trace and Parse commands. It can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening, as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID, I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s).

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that includes references to two COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" threadingModel="apartment" />
  </file>
</assembly>

Simple enough, right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable, and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create:

- App.exe.manifest
- A folder called App.deploy
- A manifest file in App.deploy
- All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" / dependency dependentAssembly assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" / dependentAssembly dependency assembly   Note this file only contains a dependency to App.deploy which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and in it copy the DLLs and support runtimes. I then create App.deploy.manifest. App.deploy.manifest xml version="1.0" encoding="UTF-8" standalone="yes"? assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" / file name="simpleserver.DLL" comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" / file file name="sidebysidedeploy.dll" comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" / file assembly   In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:   Routing Manifests to different Folders In theory registrationless COM should be pretty easy in painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS - side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble SxsTrace/SxsParse are a huge help to track down the problems. And remember that if you do have problems that you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…© Rick Strahl, West Wind Technologies, 2005-2011Posted in COM  .NET  FoxPro   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • ASP.NET TextBox TextChanged event not firing in custom EditorPart

    - by Ben Collins
    This is a classic sort of question, I suppose, but it seems that most people are interested in having the textbox cause a postback. I'm not. I just want the event to fire when a postback occurs. I have created a webpart with a custom editorpart. The editorpart renders with a textbox and a button. Clicking the button causes a dialog to open. When the dialog is closed, it sets the value of the textbox via javascript and then does __doPostBack using the ClientID of the editorpart. The postback happens, but the TextChanged event never fires, and I'm not sure if it's a problem with the way __doPostBack is invoked, or if it's because of the way I'm setting up the event handler, or something else. Here's what I think is the relevant portion of the code from the editorpart: protected override void CreateChildControls() { _txtListUrl = new TextBox(); _txtListUrl.ID = "targetSPList"; _txtListUrl.Style.Add(HtmlTextWriterStyle.Width, "60%"); _txtListUrl.ToolTip = "Select List"; _txtListUrl.CssClass = "ms-input"; _txtListUrl.Attributes.Add("readOnly", "true"); _txtListUrl.Attributes.Add("onChange", "__doPostBack('" + this.ClientID + "', '');"); _txtListUrl.Text = this.ListString; _btnListPicker = new HtmlInputButton(); _btnListPicker.Style.Add(HtmlTextWriterStyle.Width, "60%"); _btnListPicker.Attributes.Add("Title", "Select List"); _btnListPicker.ID = "browseListsSmtButton"; _btnListPicker.Attributes.Add("onClick", "mso_launchListSmtPicker()"); _btnListPicker.Value = "Select List"; this.AddConfigurationOption("News List", "Choose the list that serves as the data source.", new Control[] { _txtListUrl, _btnListPicker }); if (this.ShowViewSelection) { _txtListUrl.TextChanged += new EventHandler(_txtListUrl_TextChanged); _ddlViews = new DropDownList(); _ddlViews.ID = "_ddlViews"; this.AddConfigurationOption("View", _ddlViews); } } protected override void OnPreRender(EventArgs e) { ScriptLink.Register(this.Page, "PickerTreeDialog.js", true); string lastSelectedListId = string.Empty; if (!this.WebId.Equals(Guid.Empty) && !this.ListId.Equals(Guid.Empty)) { lastSelectedListId = SPHttpUtility.EcmaScriptStringLiteralEncode( string.Format("SPList:{0}?SPWeb:{1}:", this.ListId.ToString(), this.WebId.ToString())); } string script = "\r\n var lastSelectedListSmtPickerId = '" + lastSelectedListId + "';" + "\r\n function mso_launchListSmtPicker(){" + "\r\n if (!document.getElementById) return;" + "\r\n" + "\r\n var listTextBox = document.getElementById('" + SPHttpUtility.EcmaScriptStringLiteralEncode(_txtListUrl.ClientID) + "');" + "\r\n if (listTextBox == null) return;" + "\r\n" + "\r\n var serverUrl = '" + SPHttpUtility.EcmaScriptStringLiteralEncode(SPContext.Current.Web.ServerRelativeUrl) + "';" + "\r\n" + "\r\n var callback = function(results) {" + "\r\n if (results == null || results[1] == null || results[2] == null) return;" + "\r\n" + "\r\n lastSelectedListSmtPickerId = results[0];" + "\r\n var listUrl = '';" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[1].charAt(0) == '/') results[1] = results[1].substring(1);" + "\r\n listUrl = listUrl + results[1];" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[2].charAt(0) == '/') results[2] = results[2].substring(1);" + "\r\n listUrl = listUrl + results[2];" + "\r\n listTextBox.value = listUrl;" + "\r\n __doPostBack('" + this.ClientID + "','');" + "\r\n }" + "\r\n LaunchPickerTreeDialog('CbqPickerSelectListTitle','CbqPickerSelectListText','websLists','', 
serverUrl, lastSelectedListSmtPickerId,'','','/_layouts/images/smt_icon.gif','', callback);" + "\r\n }"; this.Page.ClientScript.RegisterClientScriptBlock(typeof(ListPickerEditorPart), "mso_launchListSmtPicker", script, true); if ((!string.IsNullOrEmpty(_txtListUrl.Text) && _ddlViews.Items.Count == 0) || _listSelectionChanged) { _ddlViews.Items.Clear(); if (!string.IsNullOrEmpty(_txtListUrl.Text)) { using (SPWeb web = SPContext.Current.Site.OpenWeb(this.WebId)) { foreach (SPView view in web.Lists[this.ListId].Views) { _ddlViews.Items.Add(new ListItem(view.Title, view.ID.ToString())); } } _ddlViews.Enabled = _ddlViews.Items.Count > 0; } else { _ddlViews.Enabled = false; } } base.OnPreRender(e); } void _txtListUrl_TextChanged(object sender, EventArgs e) { this.SetPropertiesFromChosenListString(_txtListUrl.Text); _listSelectionChanged = true; } Any ideas? Update: I forgot to mention these methods, which are called above: protected virtual void AddConfigurationOption(string title, Control inputControl) { this.AddConfigurationOption(title, null, inputControl); } protected virtual void AddConfigurationOption(string title, string description, Control inputControl) { this.AddConfigurationOption(title, description, new List<Control>(new Control[] { inputControl })); } protected virtual void AddConfigurationOption(string title, string description, IEnumerable<Control> inputControls) { HtmlGenericControl divSectionHead = new HtmlGenericControl("div"); divSectionHead.Attributes.Add("class", "UserSectionHead"); this.Controls.Add(divSectionHead); HtmlGenericControl labTitle = new HtmlGenericControl("label"); labTitle.InnerHtml = HttpUtility.HtmlEncode(title); divSectionHead.Controls.Add(labTitle); HtmlGenericControl divUserSectionBody = new HtmlGenericControl("div"); divUserSectionBody.Attributes.Add("class", "UserSectionBody"); this.Controls.Add(divUserSectionBody); HtmlGenericControl divUserControlGroup = new HtmlGenericControl("div"); divUserControlGroup.Attributes.Add("class", "UserControlGroup"); divUserSectionBody.Controls.Add(divUserControlGroup); if (!string.IsNullOrEmpty(description)) { HtmlGenericControl spnDescription = new HtmlGenericControl("div"); spnDescription.InnerHtml = HttpUtility.HtmlEncode(description); divUserControlGroup.Controls.Add(spnDescription); } foreach (Control inputControl in inputControls) { divUserControlGroup.Controls.Add(inputControl); } this.Controls.Add(divUserControlGroup); HtmlGenericControl divUserDottedLine = new HtmlGenericControl("div"); divUserDottedLine.Attributes.Add("class", "UserDottedLine"); divUserDottedLine.Style.Add(HtmlTextWriterStyle.Width, "100%"); this.Controls.Add(divUserDottedLine); }

    Read the article

  • Problem Using POI To Set CellStyleProperty With HSSFCellUtil

    - by Alvin Sim
I have a Java class which uses Apache POI to generate reports in Excel. When I run the Java class from my IDE or the command prompt, I only see warning messages from log4j, as below:

log4j:WARN No appenders could be found for logger (org.apache.commons.beanutils.converters.BooleanConverter).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Despite the warning messages, the report is generated successfully. But when I run it from my web app, which uses JSP and submits the form to a servlet that calls the Java class, the Java class seems to have problems applying the style properties to the cell. Below are the Java code and the stack trace. I'm testing this on a standalone OC4J; the IDE I'm using is Oracle's JDeveloper and the JDK is 1.4.2. I've been looking high and low the whole day but can't figure out why.

Code:

region = new Region(1, (short) 1, 5, (short) 2);
sheet.addMergedRegion(region);
HSSFRegionUtil.setBorderBottom((short) 1, region, sheet, workBook);

Stack trace:

10/06/07 16:03:17 SvltRptProcessor ACTION=print_to_file RPT_CLASSNAME=com.reports.BP.DailySalesBudgetExcelRpt DES_TYPE=file DES_FORMAT=xls
10/06/07 16:03:17 rptFilename=/oracle/reports//20100607_160317_BP_DailySalesBudgetByPmgrp_OPR.xls
10/06/07 16:03:17 ReportRunner printToFile execute -> com.reports.BP.DailySalesBudgetExcelRpt
10/06/07 16:03:17 enter daily sales budget excel rpt -----> print()
10/06/07 16:03:18 Tutalii: C:\oc4j10gmy\j2ee\home\applib\poi-2.5.1.jar archive
10/06/07 16:03:19 org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:509)
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:285)
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:255)
10/06/07 16:03:19 at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:381)
10/06/07 16:03:19 at org.apache.commons.beanutils.ConvertUtilsBean.<init>(ConvertUtilsBean.java:157)
10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean.<init>(BeanUtilsBean.java:117)
10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean$1.initialValue(BeanUtilsBean.java:68)
10/06/07 16:03:19 at org.apache.commons.beanutils.ContextClassLoaderLocal.get(ContextClassLoaderLocal.java:153)
10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean.getInstance(BeanUtilsBean.java:80)
10/06/07 16:03:19 at org.apache.commons.beanutils.PropertyUtilsBean.getInstance(PropertyUtilsBean.java:114)
10/06/07 16:03:19 at org.apache.commons.beanutils.PropertyUtils.describe(PropertyUtils.java:209)
10/06/07 16:03:19 at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:174)
10/06/07 16:03:19 at org.apache.poi.hssf.usermodel.contrib.HSSFRegionUtil.setBorderBottom(HSSFRegionUtil.java:153)
10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.setRegion(DailySalesBudgetExcelRpt.java:773)
10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.createHdr(DailySalesBudgetExcelRpt.java:308)
10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.start(DailySalesBudgetExcelRpt.java:272)
10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.print(DailySalesBudgetExcelRpt.java:222)
10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.printToFile(ReportRunner.java:601)
10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.doPrint(ReportRunner.java:302)
10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.run(ReportRunner.java:270)
10/06/07 16:03:19 at java.lang.Thread.run(Thread.java:619)
10/06/07 16:03:19 Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:420)
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:502)
10/06/07 16:03:19 ... 20 more
10/06/07 16:03:19 Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category
10/06/07 16:03:19 at java.lang.Class.getDeclaredConstructors0(Native Method)
10/06/07 16:03:19 at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
10/06/07 16:03:19 at java.lang.Class.getConstructor0(Class.java:2699)
10/06/07 16:03:19 at java.lang.Class.getConstructor(Class.java:1657)
10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:417)
10/06/07 16:03:19 ... 21 more
10/06/07 16:03:19 Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Category
10/06/07 16:03:19 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
10/06/07 16:03:19 at java.security.AccessController.doPrivileged(Native Method)
10/06/07 16:03:19 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
10/06/07 16:03:19 at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
10/06/07 16:03:19 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
10/06/07 16:03:19 at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
10/06/07 16:03:19 ... 26 more

org.apache.commons.lang.exception.NestableException: Couldn't setCellStyleProperty.
at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:209)
at org.apache.poi.hssf.usermodel.contrib.HSSFRegionUtil.setBorderBottom(HSSFRegionUtil.java:153)
at com.reports.BP.DailySalesBudgetExcelRpt.setRegion(DailySalesBudgetExcelRpt.java:773)
at com.reports.BP.DailySalesBudgetExcelRpt.createHdr(DailySalesBudgetExcelRpt.java:308)
at com.reports.BP.DailySalesBudgetExcelRpt.start(DailySalesBudgetExcelRpt.java:272)
at com.reports.BP.DailySalesBudgetExcelRpt.print(DailySalesBudgetExcelRpt.java:222)
at com.servlet.RPT.ReportRunner.printToFile(ReportRunner.java:601)
at com.servlet.RPT.ReportRunner.doPrint(ReportRunner.java:302)
at com.servlet.RPT.ReportRunner.run(ReportRunner.java:270)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:509)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:285)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:255)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:381)
at org.apache.commons.beanutils.ConvertUtilsBean.<init>(ConvertUtilsBean.java:157)
at org.apache.commons.beanutils.BeanUtilsBean.<init>(BeanUtilsBean.java:117)
at org.apache.commons.beanutils.BeanUtilsBean$1.initialValue(BeanUtilsBean.java:68)
at org.apache.commons.beanutils.ContextClassLoaderLocal.get(ContextClassLoaderLocal.java:153)
at org.apache.commons.beanutils.BeanUtilsBean.getInstance(BeanUtilsBean.java:80)
at org.apache.commons.beanutils.PropertyUtilsBean.getInstance(PropertyUtilsBean.java:114)
at org.apache.commons.beanutils.PropertyUtils.describe(PropertyUtils.java:209)
at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:174)
... 9 more
Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor
at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:420)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:502)
... 20 more
Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
at java.lang.Class.getConstructor0(Class.java:2699)
at java.lang.Class.getConstructor(Class.java:1657)
at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:417)
... 21 more
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Category
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
... 26 more
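The innermost "Caused by" is a ClassNotFoundException for org.apache.log4j.Category, i.e. log4j itself is not on the web application's runtime classpath (commons-logging tries to construct a log4j logger and fails). A likely fix, offered here as an assumption rather than a confirmed answer: add log4j.jar to the web app's WEB-INF/lib and supply a minimal log4j.properties on the classpath, for example:

# log4j.properties - minimal console configuration (illustrative names and levels)
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n

This configuration would also silence the "No appenders could be found" warnings seen when running from the IDE.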

    Read the article

  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
There are probably many ongoing debates about whether to use portlets or task flows in a WebCenter custom portal application. The main battleground in these debates is usually which technology enables better performance. The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further that discussion. However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue.

An example of the presence of the mismatch can be seen in the application's log:

The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.

Notice that due to the mismatch the portlet's CSS cannot be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of this post defines the portlet mismatch and covers some debugging tips that can help you solve it. Following that, I give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application.

Portlet Mismatch Defined

In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) tries to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet generates its own stylesheet and references it from its markup - this is called a mismatched skin. This can happen because:

1. The consumer and producer use different versions of ADF Faces, or
2. The consumer has additional skin-additions that the producer doesn't have, or vice versa, or
3. The producer does not have the consumer skin

For cases (1) and (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer defaults to using the portlet skin.

If there is a skin mismatch there may be a performance hit because:

- The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off)
- The generated portlet markup uses uncompressed styles, resulting in larger markup

It is often not obvious when a skin mismatch occurs, unless you look for one of these indicators. The first is log messages in the producer log, for example:

The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.
The second is the portlet markup inside the iframe: there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy):

<link rel=\"stylesheet\" charset=\"UTF-8\" type=\"text/css\" href=\"http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css...

Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can also see a request being made to the portlet stylesheet resource (see the URL above).

There are a number of reasons for a mismatched skin. For the skins to match, the producer and consumer must have matching configurations for:

- The ADF Faces version (different versions may have different style selectors)
- Style compression, which is defined in web.xml (the default value is false, i.e. compression is ON)
- Tonal styles or themes, also defined in web.xml via context-params
- The same skin additions (jars with skins) available to both producer and consumer. Skin additions are defined in trinidad-skins.xml, using the <skin-addition> tags. These are aggregated from all the jar files on the classpath. If any jar exists on the producer but not the consumer, or vice versa, you get a mismatch.

Debugging Tips

Ensure the style compression and tonal styles/themes match on the consumer and producer by comparing the web.xml documents of the two applications. It is a bit more involved to determine whether the jars match. However, you can enable Trinidad logging to show which skin-additions it is processing. To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST. For example, in the case of the WebLogic server used by JDeveloper:

$JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml

Add a new entry:

<logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/>

Restart WebLogic. Run the consumer page; you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch.
The following is an example of what the log will produce with this setting:

[SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList]
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml
Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml

The Complete Example

The first step is to create the shared library. The WebCenter documentation covering this is located here in section 15.7. In addition, our ADF guru Frank Nimphius also covers this in his blog. Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and the consumer.

Create a new Generic Application
- Give the application a name (i.e. MySharedSkin)
- Give a project name (i.e. MySkinProject)
- Leave Project Technologies blank (none selected), and click Finish

Create the trinidad-skins.xml
- Right-click on the MySkinProject node in the Application Navigator and select "New"
- In the New Gallery, click on "General", select "File" from the Items, and click OK
- In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF
- In the trinidad-skins.xml, complete the skin entry.
For example:

<?xml version="1.0" encoding="windows-1252" ?>
<skins xmlns="http://myfaces.apache.org/trinidad/skin">
  <skin>
    <id>mysharedskin.desktop</id>
    <family>mysharedskin</family>
    <extends>fusionFx-v1.desktop</extends>
    <style-sheet-name>css/mysharedskin.css</style-sheet-name>
  </skin>
</skins>

Create the CSS file
- In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
- In the New Gallery, select Web Tier -> HTML, CSS File from the Items and click OK
- In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css)
- Ensure that the directory path is under META-INF (i.e. MySkinProject\src\META-INF\css)
- Once the new CSS opens in the editor, add in a style selector. For example, this selector will style the background of a particular panelGroupLayout:

af|panelGroupLayout.customPGL{
    background-color:Fuchsia;
}

Create the MANIFEST.MF (used for the deployment JAR)
- In the Application Navigator, right-click on the META-INF folder (where the trinidad-skins.xml is located), and select "New"
- In the New Gallery, click on "General", select "File" from the Items, and click OK
- In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF
- Complete the MANIFEST.MF, where the extension name is the shared library name:

Manifest-Version: 1.1
Created-By: Martin Deh
Implementation-Title: mysharedskin
Extension-Name: mysharedskin.lib.def
Specification-Version: 1.0.1
Implementation-Version: 1.0.1
Implementation-Vendor: MartinDeh

Create a new Deployment Profile
- Right-click on the MySkinProject node, and select New
- From the New Gallery, select General -> Deployment Profiles, Shared Library JAR File from the Items, and click OK
- In the Create Deployment Profile dialog, give a name (i.e. mysharedskinlib) and click OK
- In the Edit JAR Deployment dialog, un-check the Include Manifest File option
- Select Project Output -> Contributors, and check Project Source Path
- Select Project Output -> Filters, and ensure that all items under the META-INF folder are selected
- Click OK to exit the Project Properties dialog

Deploy the shared library to WebLogic (start the server before these steps)
- Right-click on the MySkinProject node and select Deploy
- For this example, I will deploy to the JDeveloper WLS
- In the Deploy dialog, select Deploy to Weblogic Application Server and click Next
- Choose IntegratedWebLogicServer and click Next
- Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a shared Library radio is selected
- Click Finish
- Open the WebLogic console to see the deployed shared library

The following are the steps to create a simple test portlet.

Create a new WebCenter Portal - Portlet Producer Application
- In the Create Portlet Producer dialog, select the default settings and click Finish
- Right-click on the Portlets node and select New
- In the New Gallery, select Web Tier -> Portlets, Standards-based Java Portlet (JSR 286) and click OK
- In the General Portlet information dialog, give the portlet a name (i.e. MyPortlet) and click Next 2 times, stopping at Step 3
- In the Content Types, select the "view" node; in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish
- Once the portlet code is generated, open the view.jspx in the source editor
- Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code:

<af:form>
  <af:panelGroupLayout id="pgl1" styleClass="customPGL">
    <af:outputText value="background from shared lib skin" id="ot1"/>
  </af:panelGroupLayout>
</af:form>

- Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag
- In the Application Resources view, under Descriptors -> META-INF, double-click to open the weblogic-application.xml
- Add a library reference to the shared skin library (note: the library-name must match the Extension-Name declared in the MANIFEST.MF):

<library-ref>
  <library-name>mysharedskin.lib.def</library-name>
</library-ref>

Notice that a reference to oracle.webcenter.skin exists. This is important if this portlet is going to be consumed by a WebCenter Portal application. If this tag is not present, the portlet skin mismatch will happen.

Configure the portlet for deployment. Create the portlet deployment WAR:
- Right-click on the Portlets node and select New
- In the New Gallery, select Deployment Profiles, WAR File from the Items and click OK
- In the Create Deployment Profile dialog, give a name (i.e. myportletwar) and click OK
- Keep all of the defaults; however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root; this will be needed to obtain the producer WSDL URL)
- Click OK, then OK again to exit from the Properties dialog

Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR within an EAR:
- In the Application dropdown, select Deploy -> New Deployment Profile...
- By default EAR File has been selected; click OK
- Give the Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK
- In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked
- Keep all of the other defaults and click OK
- For this demo, un-check the Auto Generate ... option and all of the Security Deployment Options, then click OK
- Save All
- In the Application dropdown, select Deploy -> MyPortletProducer
- In the Deployment Action, select Deploy to Application Server, click Next
- Choose IntegratedWebLogicServer and click Next
- Select the Deploy to selected instances in the domain radio, select Default Server (note: the server must already be started), and ensure the Deploy as a standalone Application radio is selected
- The select deployment type dialog (identifying the deployment as a JSR 286 portlet) appears. Keep the default "Yes" radio selection and click OK
- Open the WebLogic console to see the deployed portlet

The last step is to create the test portlet consuming application. This will be done using the OOTB WebCenter Portal - Framework Application.

Create the Portlet Producer Connection
- In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root)
- Open a browser and paste in the URL. The Portlet information page should appear. Click on the WSRP v2 WSDL link
- Copy the URL from the browser (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL)
- In the Application Resources view, right-click on the Connections folder and select New Connection -> WSRP Connection
- Give the producer a name or accept the default, click Next
- Enter (paste in) the WSDL URL, click Next
- If the connection to the portlet is successful, Step 3 (Specify Additional ...) should appear. Accept the defaults and click Finish

Add the portlet to a test page
- Open the home.jspx. Note in the visual editor the orange dashed border, which identifies the panelCustomizable tag.
- From the Application Resources, select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section. A Confirm Portlet Type dialog appears; keep the default ADF Rich Portlet and click OK

Configure the portlet to use the shared skin library
- Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library. See the create portlet example above for the steps
- Since by default the custom portal uses a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered
- Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag with mysharedskin (this is the skin family name defined in the trinidad-skins.xml)
- Remove the skin-version tag
- Right-click on the index.html to test the application

Notice that the JDeveloper log view does not have any reporting of a skin mismatch. In addition, since I have configured the extra logging outlined in the debugging section above, I can see the processed skin jar in both the producer and consumer logs:

<SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml

    Read the article

  • WPF Blurry Images - Bitmap Class

    - by Luke
    I am using the following sample at http://blogs.msdn.com/dwayneneed/archive/2007/10/05/blurry-bitmaps.aspx within VB.NET. The code is shown below. I am having a problem: when my application loads, the CPU pegs at 50-70%. I have determined that the problem is with the Bitmap class. The OnLayoutUpdated() method is calling InvalidateVisual() continuously. This is because some points are not returning as equal, but rather as, e.g., Point(0.0, -0.5). Can anyone see any bugs within this code, or know a better implementation for pixel snapping a Bitmap image so it is not blurry? P.S. The sample code was in C#; however, I believe that it was converted correctly.

Imports System
Imports System.Collections.Generic
Imports System.Windows
Imports System.Windows.Media
Imports System.Windows.Media.Imaging

Class Bitmap
    ' Use FrameworkElement instead of UIElement so Data Binding works as expected
    Inherits FrameworkElement

    Private _sourceDownloaded As EventHandler
    Private _sourceFailed As EventHandler(Of ExceptionEventArgs)
    Private _pixelOffset As Windows.Point

    Public Sub New()
        _sourceDownloaded = New EventHandler(AddressOf OnSourceDownloaded)
        _sourceFailed = New EventHandler(Of ExceptionEventArgs)(AddressOf OnSourceFailed)
        AddHandler LayoutUpdated, AddressOf OnLayoutUpdated
    End Sub

    Public Shared ReadOnly SourceProperty As DependencyProperty = DependencyProperty.Register("Source", GetType(BitmapSource), GetType(Bitmap), New FrameworkPropertyMetadata(Nothing, FrameworkPropertyMetadataOptions.AffectsRender Or FrameworkPropertyMetadataOptions.AffectsMeasure, New PropertyChangedCallback(AddressOf Bitmap.OnSourceChanged)))

    Public Property Source() As BitmapSource
        Get
            Return DirectCast(GetValue(SourceProperty), BitmapSource)
        End Get
        Set(ByVal value As BitmapSource)
            SetValue(SourceProperty, value)
        End Set
    End Property

    Public Shared Function FindParentWindow(ByVal child As DependencyObject) As Window
        Dim parent As DependencyObject = VisualTreeHelper.GetParent(child)
        ' Check if this is the end of the tree
        If parent Is Nothing Then
            Return Nothing
        End If
        Dim parentWindow As Window = TryCast(parent, Window)
        If parentWindow IsNot Nothing Then
            Return parentWindow
        Else
            ' Use recursion until it reaches a Window
            Return FindParentWindow(parent)
        End If
    End Function

    Public Event BitmapFailed As EventHandler(Of ExceptionEventArgs)

    ' Return our measure size to be the size needed to display the bitmap pixels.
    '
    ' Use MeasureOverride instead of MeasureCore so Data Binding works as expected.
    ' Protected Overloads Overrides Function MeasureCore(ByVal availableSize As Size) As Size
    Protected Overloads Overrides Function MeasureOverride(ByVal availableSize As Size) As Size
        Dim measureSize As New Size()
        Dim bitmapSource As BitmapSource = Source
        If bitmapSource IsNot Nothing Then
            Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
            If Me.VisualParent IsNot Nothing Then
                Dim window As Window = window.GetWindow(Me.VisualParent)
                If window IsNot Nothing Then
                    ps = PresentationSource.FromVisual(window.GetWindow(Me.VisualParent))
                ElseIf FindParentWindow(Me) IsNot Nothing Then
                    ps = PresentationSource.FromVisual(FindParentWindow(Me))
                End If
            End If
            If ps IsNot Nothing Then
                Dim fromDevice As Matrix = ps.CompositionTarget.TransformFromDevice
                Dim pixelSize As New Vector(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
                Dim measureSizeV As Vector = fromDevice.Transform(pixelSize)
                measureSize = New Size(measureSizeV.X, measureSizeV.Y)
            Else
                measureSize = New Size(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
            End If
        End If
        Return measureSize
    End Function

    Protected Overloads Overrides Sub OnRender(ByVal dc As DrawingContext)
        Dim bitmapSource As BitmapSource = Me.Source
        If bitmapSource IsNot Nothing Then
            _pixelOffset = GetPixelOffset()
            ' Render the bitmap offset by the needed amount to align to pixels.
            dc.DrawImage(bitmapSource, New Rect(_pixelOffset, DesiredSize))
        End If
    End Sub

    Private Shared Sub OnSourceChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs)
        Dim bitmap As Bitmap = DirectCast(d, Bitmap)
        Dim oldValue As BitmapSource = DirectCast(e.OldValue, BitmapSource)
        Dim newValue As BitmapSource = DirectCast(e.NewValue, BitmapSource)
        If ((oldValue IsNot Nothing) AndAlso (bitmap._sourceDownloaded IsNot Nothing)) AndAlso (Not oldValue.IsFrozen AndAlso (TypeOf oldValue Is BitmapSource)) Then
            RemoveHandler DirectCast(oldValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
            RemoveHandler DirectCast(oldValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
            ' ((BitmapSource)newValue).DecodeFailed -= bitmap._sourceFailed; // 3.5
        End If
        If ((newValue IsNot Nothing) AndAlso (TypeOf newValue Is BitmapSource)) AndAlso Not newValue.IsFrozen Then
            AddHandler DirectCast(newValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
            AddHandler DirectCast(newValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
            ' ((BitmapSource)newValue).DecodeFailed += bitmap._sourceFailed; // 3.5
        End If
    End Sub

    Private Sub OnSourceDownloaded(ByVal sender As Object, ByVal e As EventArgs)
        InvalidateMeasure()
        InvalidateVisual()
    End Sub

    Private Sub OnSourceFailed(ByVal sender As Object, ByVal e As ExceptionEventArgs)
        Source = Nothing ' setting a local value seems sketchy...
        RaiseEvent BitmapFailed(Me, e)
    End Sub

    Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs)
        ' This event just means that layout happened somewhere. However, this is
        ' what we need since layout anywhere could affect our pixel positioning.
        Dim pixelOffset As Windows.Point = GetPixelOffset()
        If Not AreClose(pixelOffset, _pixelOffset) Then
            InvalidateVisual()
        End If
    End Sub

    ' Gets the matrix that will convert a Windows.Point from "above" the
    ' coordinate space of a visual into the coordinate space "below" the visual.
    Private Function GetVisualTransform(ByVal v As Visual) As Matrix
        If v IsNot Nothing Then
            Dim m As Matrix = Matrix.Identity
            Dim transform As Transform = VisualTreeHelper.GetTransform(v)
            If transform IsNot Nothing Then
                Dim cm As Matrix = transform.Value
                m = Matrix.Multiply(m, cm)
            End If
            Dim offset As Vector = VisualTreeHelper.GetOffset(v)
            m.Translate(offset.X, offset.Y)
            Return m
        End If
        Return Matrix.Identity
    End Function

    Private Function TryApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean, ByVal throwOnError As Boolean, ByRef success As Boolean) As Windows.Point
        success = True
        If v IsNot Nothing Then
            Dim visualTransform As Matrix = GetVisualTransform(v)
            If inverse Then
                If Not throwOnError AndAlso Not visualTransform.HasInverse Then
                    success = False
                    Return New Windows.Point(0, 0)
                End If
                visualTransform.Invert()
            End If
            Point = visualTransform.Transform(Point)
        End If
        Return Point
    End Function

    Private Function ApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean) As Windows.Point
        Dim success As Boolean = True
        Return TryApplyVisualTransform(Point, v, inverse, True, success)
    End Function

    Private Function GetPixelOffset() As Windows.Point
        Dim pixelOffset As New Windows.Point()
        Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
        If ps IsNot Nothing Then
            Dim rootVisual As Visual = ps.RootVisual
            ' Transform (0,0) from this element up to pixels.
            pixelOffset = Me.TransformToAncestor(rootVisual).Transform(pixelOffset)
            pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, False)
            pixelOffset = ps.CompositionTarget.TransformToDevice.Transform(pixelOffset)
            ' Round the origin to the nearest whole pixel.
            pixelOffset.X = Math.Round(pixelOffset.X)
            pixelOffset.Y = Math.Round(pixelOffset.Y)
            ' Transform the whole-pixel back to this element.
            pixelOffset = ps.CompositionTarget.TransformFromDevice.Transform(pixelOffset)
            pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, True)
            pixelOffset = rootVisual.TransformToDescendant(Me).Transform(pixelOffset)
        End If
        Return pixelOffset
    End Function

    Private Function AreClose(ByVal Point1 As Windows.Point, ByVal Point2 As Windows.Point) As Boolean
        Return AreClose(Point1.X, Point2.X) AndAlso AreClose(Point1.Y, Point2.Y)
    End Function

    Private Function AreClose(ByVal value1 As Double, ByVal value2 As Double) As Boolean
        If value1 = value2 Then
            Return True
        End If
        Dim delta As Double = value1 - value2
        Return ((delta < 0.00000153) AndAlso (delta > -0.00000153))
    End Function
End Class
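    For anyone hitting this on .NET 4 or later: a simpler route to crisp bitmaps, which may make a custom element like the one above unnecessary, is to let WPF align the image to device pixels itself. Below is a minimal C# sketch; the APIs used (UseLayoutRounding, SnapsToDevicePixels, RenderOptions.BitmapScalingMode) are standard WPF, but whether they fully reproduce the sample's behavior in a given layout is an assumption you should verify:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class CrispImage
{
    // Builds an Image element configured so WPF snaps it to device pixels,
    // avoiding the blurriness the custom Bitmap class works around by hand.
    public static Image Create(BitmapSource source)
    {
        var img = new Image { Source = source, Stretch = Stretch.None };
        img.UseLayoutRounding = true;       // .NET 4+: round layout positions to device pixels
        img.SnapsToDevicePixels = true;     // render-path pixel snapping hint
        RenderOptions.SetBitmapScalingMode(img, BitmapScalingMode.NearestNeighbor);
        return img;
    }
}

    Because no LayoutUpdated handler is involved, this approach also sidesteps the InvalidateVisual() loop described above.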

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
    This post focuses on some advanced programming topics for IaaS (Infrastructure as a Service), which Windows Azure provides as virtual machines (with their related resources, like virtual disks and virtual networks). Windows Azure started as a PaaS cloud platform, but because some business cases need full control over the virtual machine, Windows Azure has moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, perhaps for these reasons:

- Working on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines
- Providing a multi-tenant system which consumes Windows Azure virtual machines
- Automating processes on your on-premises or cloud service which need to utilize virtual resources

We are going to implement the following basic operations using C# code:

1. List images
2. Create virtual machine
3. List virtual machines
4. Restart virtual machine
5. Delete virtual machine

Before implementing the above operations, we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate), which permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it does not exist, you can export the .cer as a .pfx.

Note: You will need to install .NET 4.5 on your machine to try the code.

So let's start. This post builds on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code"; I'm here to clarify some points and to add new operations which do not exist in Michael's demo.

The basic C# class used here as a client to the Azure REST API for the IaaS service is HttpClient (it provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI). This object must be initialized with the required data, like the certificate, headers, and content if required.
Also, note that the code here uses asynchronous programming for the calls to Azure, which improves performance and makes it possible to compose complex operations that depend on more than one sub-call.

The following code shows how to get the certificate and initialize the HttpClient object with the required data, like headers and content:

HttpClient GetHttpClient()
{
    X509Store certificateStore = null;
    X509Certificate2 certificate = null;
    try
    {
        certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        certificateStore.Open(OpenFlags.ReadOnly);
        string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"];
        var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        if (certificates.Count > 0)
        {
            certificate = certificates[0];
        }
    }
    finally
    {
        if (certificateStore != null)
            certificateStore.Close();
    }

    WebRequestHandler handler = new WebRequestHandler();
    if (certificate != null)
    {
        handler.ClientCertificates.Add(certificate);
        HttpClient httpClient = new HttpClient(handler);
        // And to set required headers like x-ms-version
        httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
        return httpClient;
    }
    return null;
}

Let us keep the httpClient object as the reference object used to call the Windows Azure REST API IaaS service. For each request operation we need to define:

- Request URI
- HTTP Method
- Headers
- Content body

(1) List images

The List OS Images operation retrieves a list of the OS images from the image repository.

Request URI: https://management.core.windows.net/<subscription-id>/services/images (replace <subscription-id> with your Windows Azure subscription ID)
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# Code:

List<String> imageList = new List<String>();
// replace _subscriptionid with your WA subscription
String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);
HttpClient http = GetHttpClient();
Stream responseStream = await http.GetStreamAsync(uri);
if (responseStream != null)
{
    XDocument xml = XDocument.Load(responseStream);
    var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");
    foreach (var image in images)
    {
        string img = image.Element(ns + "Name").Value;
        imageList.Add(img);
    }
}

More information about the REST call (Request/Response) is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx
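Since all of these methods are async, here is a minimal, hypothetical console harness showing how they can be driven from a synchronous entry point. The VMManager class name (which the post's own Log.Info call references) is assumed to collect the methods shown in this post, and the region name and subscription placeholder are illustrative only; the await/Task plumbing itself is standard C#:

using System;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        // Before C# 7.1 a console Main cannot be async, so block on one async driver method.
        RunAsync().GetAwaiter().GetResult();
    }

    static async Task RunAsync()
    {
        // VMManager is a hypothetical wrapper collecting the methods from this post;
        // the post shows the methods individually, so the grouping is an assumption.
        var mgr = new VMManager();

        // Create a hosted service in a region, then inspect the returned request id.
        String requestId = await mgr.NewAzureCloudService("mytestvm01", "West US", null, "<subscription-id>");
        Console.WriteLine("x-ms-request-id: " + requestId);
    }
}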
(2) Create Virtual Machine

Creating a virtual machine requires the service and deployment to be created first, so creating a VM is done in three steps when the hosted service and deployment do not exist yet:

1. Create the hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services.
2. Create the deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running.
3. Create the virtual machine; the info from the previous two steps is required in this step.

I suggest using the same name for the service, the deployment, and the virtual machine, to make it easy to manage virtual machines.

Note: The name for the hosted service must be unique within Windows Azure. This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net/

2.1 Create service

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx

C# code. The following method shows how to create the hosted service:

async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid)
{
    String requestID = String.Empty;
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    HttpClient http = GetHttpClient();

    System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding();
    byte[] svcNameBytes = ae.GetBytes(ServiceName);

    String locationEl = String.Empty;
    String locationVal = String.Empty;
    if (String.IsNullOrEmpty(Location) == false)
    {
        locationEl = "Location";
        locationVal = Location;
    }
    else
    {
        locationEl = "AffinityGroup";
        locationVal = AffinityGroup;
    }

    XElement srcTree = new XElement("CreateHostedService",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("ServiceName", ServiceName),
        new XElement("Label", Convert.ToBase64String(svcNameBytes)),
        new XElement(locationEl, locationVal)
    );
    ApplyNamespace(srcTree, ns);

    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.2 Create Deployment

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name>
Replace <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package; <service-name> is provided as input from the previous step.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx

C# code. The following method shows how to create the hosted service deployment:

async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML)
{
    String requestID = String.Empty;

    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName);
    HttpClient http = GetHttpClient();

    XElement srcTree = new XElement("Deployment",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("Name", ServiceName),
        new XElement("DeploymentSlot", "Production"),
        new XElement("Label", ServiceName),
        new XElement("RoleList", null)
    );

    if (String.IsNullOrEmpty(VNETName) == false)
    {
        srcTree.Add(new XElement("VirtualNetworkName", VNETName));
    }
    if (DNSXML != null)
    {
        srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML)));
    }

    XDocument deploymentXML = new XDocument(srcTree);
    ApplyNamespace(srcTree, ns);

    deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);

    String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", "");
    HttpContent content = new StringContent(fixedXML);
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.3 Create Virtual Machine

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles
<cloudservice-name> and <deployment-name> are provided as input from the previous steps.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx

C# code:

async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML)
{
    String requestID = String.Empty;

    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);

    HttpClient http = GetHttpClient();
    HttpContent content = new StringContent(VMXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(3) List Virtual Machines

To list the virtual machines hosted on a Windows Azure subscription we have to loop over all hosted services to get their hosted virtual machines. To do that we need to execute the following operations:

1. List the hosted services
2. List each hosted service's virtual machines

3.1 Listing Hosted Services

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx

C# Code:

async private Task<List<XDocument>> GetAzureServices()
{
    // Uses the _subscriptionid field, matching how GetAzureVMs() calls this method below.
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", _subscriptionid);
    List<XDocument> services = new List<XDocument>();

    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);

    if (responseStream != null)
    {
        XDocument xml = XDocument.Load(responseStream);
        var svcs = xml.Root.Descendants(ns + "HostedService");
        foreach (XElement r in svcs)
        {
            XDocument vm = new XDocument(r);
            services.Add(vm);
        }
    }
    return services;
}

3.2 Listing Hosted Service Virtual Machines

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx C# Code async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid) { String deployment = await GetAzureDeploymentName(ServiceName); XDocument vmXML = new XDocument();   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);   HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri); if (responseStream != null) { vmXML = XDocument.Load(responseStream); }   return vmXML; }  So the final method which can be used to list all virtual machines is: async public Task<XDocument> GetAzureVMs() { List<XDocument> services = await GetAzureServices(); XDocument vms = new XDocument(); vms.Add(new XElement("VirtualMachines")); ApplyNamespace(vms.Root, ns); foreach (var svc in services) { string ServiceName = svc.Root.Element(ns + "ServiceName").Value;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");   try { HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var roles = xml.Root.Descendants(ns + "RoleInstance"); foreach (XElement r in roles) { XElement svcnameel = new XElement("ServiceName", ServiceName); ApplyNamespace(svcnameel, ns); r.Add(svcnameel); // not part of the roleinstance vms.Root.Add(r); } } } catch (HttpRequestException http) { // no vms with cloud service } } return vms; }  (4) Restart Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations HTTP Method POST (HTTP 1.1) Headers x-ms-version: 2012-03-01 Content-Type: application/xml Body <RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <OperationType>RestartRoleOperation</OperationType> </RestartRoleOperation>  More details about this http request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx  C# Code async public Task<String> RebootVM(String ServiceName, String RoleName) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName); String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);   HttpClient http = GetHttpClient();   XElement srcTree = new XElement("RestartRoleOperation", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("OperationType", "RestartRoleOperation") ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; }  (5) Delete Virtual Machine You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service also, so you can easily manage your virtual machines from code 5.1 Delete Deployment Request URI https://management.core.windows.net/< 
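Two helpers used throughout this post, GetAzureDeploymentName (called by NewAzureVM, GetAzureVM, and RebootVM above) and PollGetOperationStatus (used by the delete operations below), are not shown in the original demo. Here are minimal sketches of what they might look like; the exact element lookups and the OperationResult/OperationStatus type shapes are assumptions inferred from how the post uses them. GetAzureDeploymentName assumes the production deployment slot (as the post's GetAzureVMs does), and PollGetOperationStatus assumes the Service Management "Get Operation Status" endpoint (https://management.core.windows.net/<subscription-id>/operations/<request-id>), with the two numeric arguments read as a poll interval and a timeout, both in seconds:

async private Task<String> GetAzureDeploymentName(String ServiceName)
{
    // Ask the production slot for its deployment and read the deployment's Name element.
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");
    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        XDocument xml = XDocument.Load(responseStream);
        XElement name = xml.Root.Element(ns + "Name");
        if (name != null)
        {
            return name.Value;
        }
    }
    return String.Empty;
}

public enum OperationStatus { InProgress, Succeeded, Failed }

public class OperationResult
{
    public OperationStatus Status { get; set; }
    public String Message { get; set; }
}

async private Task<OperationResult> PollGetOperationStatus(String requestId, int pollIntervalSeconds, int timeoutSeconds)
{
    // Poll the Get Operation Status endpoint until the operation completes or we time out.
    var result = new OperationResult { Status = OperationStatus.InProgress };
    String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
    HttpClient http = GetHttpClient();
    DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);
    while (DateTime.UtcNow < deadline)
    {
        Stream responseStream = await http.GetStreamAsync(uri);
        XDocument xml = XDocument.Load(responseStream);
        String status = xml.Root.Element(ns + "Status").Value;  // InProgress, Succeeded or Failed
        if (status != "InProgress")
        {
            result.Status = (status == "Succeeded") ? OperationStatus.Succeeded : OperationStatus.Failed;
            result.Message = String.Format("Operation {0} finished with status {1}", requestId, status);
            return result;
        }
        await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));
    }
    result.Message = String.Format("Operation {0} timed out after {1} seconds", requestId, timeoutSeconds);
    return result;
}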
(5) Delete Virtual Machine

You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service as well, so you can easily manage your virtual machines from code.

5.1 Delete Deployment

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# code:

async public Task<HttpResponseMessage> DeleteDeployment(string deploymentName)
{
    // By this post's convention, the hosted service and the deployment share the same name.
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName);
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

5.2 Delete Hosted Service

Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# code:

async public Task<HttpResponseMessage> DeleteService(string serviceName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName);
    Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager));
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

And the following is the method which can be used to delete both the deployment and the service:

async public Task<string> DeleteVM(string vmName)
{
    string responseString = string.Empty;

    // As a convention in this post, a unified name is used for the service, deployment
    // and VM instance to make it easy to manage VMs.
    HttpResponseMessage responseMessage = await DeleteDeployment(vmName);

    if (responseMessage != null)
    {
        string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
        OperationResult result = await PollGetOperationStatus(requestID, 5, 120);
        if (result.Status == OperationStatus.Succeeded)
        {
            responseString = result.Message;
            HttpResponseMessage sResponseMessage = await DeleteService(vmName);
            if (sResponseMessage != null)
            {
                // Poll using the request id returned by the DeleteService call.
                string sRequestID = sResponseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
                OperationResult sResult = await PollGetOperationStatus(sRequestID, 5, 120);
                responseString += sResult.Message;
            }
        }
        else
        {
            responseString = result.Message;
        }
    }
    return responseString;
}

Note: This article is subject to be updated.

Hisham

References
- Advanced Windows Azure IaaS – Demo Code
- Windows Azure Service Management REST API Reference
- Introduction to the Azure Platform
- Representational state transfer
- Asynchronous Programming with Async and Await (C# and Visual Basic)
- HttpClient Class

    Read the article

  • Solaris 11.1: Changes to included FOSS packages

    - by alanc
    Besides the documentation changes I mentioned last time, another place you can see Solaris 11.1 changes before upgrading is in the online package repository, now that the 11.1 packages have been published to http://pkg.oracle.com/solaris/release/, as the “0.175.1.0.0.24.2” branch. (Oracle Solaris Package Versioning explains what each field in that version string means.) When you’re ready to upgrade to the packages from either this repo, or the support repository, you’ll want to first read How to Update to Oracle Solaris 11.1 Using the Image Packaging System by Pete Dennis, as there are a couple of issues you will need to be aware of to do that upgrade, several of which are due to changes in the Free and Open Source Software (FOSS) packages included with Solaris, as I’ll explain in a bit.

Solaris 11 can update more readily than Solaris 10

In the Solaris 10 and older update models, the way the updates were built constrained what changes we could make in those releases. To change an existing SVR4 package in those releases, we created a Solaris Patch, which applied to a given version of the SVR4 package and replaced, added or deleted files in it. These patches were released via the support websites (originally SunSolve, now My Oracle Support) for applying to existing Solaris 10 installations, and were also merged into the install images for the next Solaris 10 update release. (This Solaris Patches blog post from Gerry Haskins dives deeper into that subject.) Some of the restrictions of this model were that package refactoring, changes to package dependencies, and even just changing the package version number, were difficult to do in this hybrid patch/OS update model.

For instance, when Solaris 10 first shipped, it had the Xorg server from X11R6.8. Over the first couple of years of update releases we were able to keep it up to date by replacing, adding, and removing files as necessary, taking it all the way up to Xorg server release 1.3 (new version numbering began after the X11R7 split of the X11 tree into separate modules gave each module its own version). But if you run pkginfo on the SUNWxorg-server package, you’ll see it still displays a version number of 6.8, confusing users as to which version is actually included. We stopped upgrading the Xorg server releases in Solaris 10 after 1.3, as later versions added new dependencies, such as HAL, D-Bus, and libpciaccess, which were very difficult to manage in this patching model. (We later got libpciaccess to work, but HAL & D-Bus would have been much harder due to the greater dependency tree underneath those.) Similarly, every time the GNOME team looked into upgrading Solaris 10 past GNOME 2.6, they found these constraints made it so difficult it wasn’t worthwhile, and eventually GNOME’s dependencies had changed enough that it was completely infeasible.

Fortunately, this worked out for both the X11 and GNOME teams, with our management making the business decision to concentrate on the “Nevada” branch for desktop users, first as Solaris Express Desktop Edition and later as OpenSolaris, so we didn’t have to fight to try to make the package updates fit into these tight constraints. Meanwhile, the team designing the new packaging system for Solaris 11 was seeing us struggle with these problems, and making this much easier to manage for both the development teams and our users was one of their big goals for the IPS design they were working on.
Now that we’ve reached the first update release to Solaris 11, we can start to see the fruits of their labors, with more FOSS updates in 11.1 than we had in many Solaris 10 update releases, keeping software more up to date with the upstream communities. Of course, just because we can more easily update now doesn’t always mean we should or will do so; it just keeps the package system's limitations from forcing the decision for us. So while we’ve upgraded the X Window System in the 11.1 release from X11R7.6 to 7.7, the Solaris GNOME team decided it was not the right time to try to make the jump from GNOME 2 to GNOME 3, though they did update some individual components of the desktop, especially those with security fixes like Firefox. In other parts of the system, decisions as to what to update were prioritized based on how they affected other projects, or what customer requests we’d gotten for them.

So with all that background in place, what packages did we actually update or add between Solaris 11.0 and 11.1?

Core OS Functionality

One of the FOSS changes with the biggest impact in this release is the upgrade from Grub Legacy (0.97) to Grub 2 (1.99) for the x64 platform boot loader. This is the cause of one of the upgrade quirks: to go from Solaris 11.0 to 11.1 on x64 systems, you first need to update the Boot Environment tools (such as beadm) to a new version that can handle boot environments that use the Grub2 boot loader. System administrators can find the details they need to know about the new Grub in the Administering the GRand Unified Bootloader chapter of the Booting and Shutting Down Oracle Solaris 11.1 Systems guide. This change was necessary to be able to support new hardware coming into the x64 marketplace, including systems using UEFI firmware or booting off disk drives larger than 2 terabytes.

For both platforms, Solaris 11.1 adds rsyslog as an optional alternative to the traditional syslogd, and OpenSCAP for checking that security configuration settings are compliant with site policies.

Note that the support repo actually has newer versions of BIND & fetchmail than the 11.1 release, as some late-breaking critical fixes came through from the community upstream releases after the Solaris 11.1 release was frozen, and made their way to the support repository. These are responsible for the other big upgrade quirk in this release: to upgrade a system which already installed those versions from the support repo, you need to either wait for those packages to make their way to the 11.1 branch of the support repo, or follow the steps in the aforementioned upgrade walkthrough to let the package system know it's okay to temporarily downgrade those.

Developer Stack

While Solaris 11.0 included Python 2.7, many of the bundled Python modules weren’t packaged for it yet, limiting its usability. For 11.1, many more of the Python modules include 2.7 versions (enough that I filtered them out of the table below, but you can always search on the package repository server for them). For other language runtimes and development tools, 11.1 expands the use of IPS mediated links to choose which version of a package is the default when the packages are designed to allow multiple versions to install side by side. For instance, in Solaris 11.0, GNU automake 1.9 and 1.10 were provided, and developers had to run them as either automake-1.9 or automake-1.10.
In Solaris 11.1, when automake 1.11 was added, a /usr/bin/automake mediated link was also added; it points to the automake-1.11 program by default, but can be changed to another version by running the pkg set-mediator command. Mediated links were also used for the Java runtime & development kits in 11.1, changing the default versions to the Java 7 releases (the 1.7.0.x package versions), while allowing admins to switch links such as /usr/bin/javac back to Java 6 if they need to for their site, to deal with Java 7 compatibility or other issues, without having to update each usage to use the full versioned /usr/jdk/jdk1.6.0_35/bin/javac paths for every invocation.

Desktop Stack

As I mentioned before, we upgraded from X11R7.6 to X11R7.7, since a pleasant coincidence made the X.Org release dates line up nicely with our feature & code freeze dates for this release. (Or perhaps it wasn’t so coincidental; after all, one of the benefits of being the person making the release is being able to decide what schedule is most convenient for you, and this one worked well for me.) For the table below, I’ve skipped listing the packages in which we use the X11 “katamari” version for the Solaris package version (mainly packages combining elements of multiple upstream modules with independent version numbers), since they all just changed from 7.6 to 7.7.

In the graphics drivers, we worked with Intel to update the Intel Integrated Graphics Processor support to add 3D graphics and kernel mode setting on the Ivy Bridge chipsets, and updated Nvidia’s non-FOSS graphics driver from 280.13 to 295.20. Higher up in the desktop stack, PulseAudio was added for audio support and liblouis for Braille support, and the GNOME applications were built to use them. The Mozilla applications, Firefox & Thunderbird, moved to the current Extended Support Release (ESR) versions, 10.x for each, to bring up-to-date security fixes without having to be on Mozilla’s aggressive 6-week feature-cycle release train.

Detailed list of changes

This table shows most of the changes to the FOSS packages between Solaris 11.0 and 11.1. As noted above, some were excluded for clarity, or to reduce noise and duplication. All the FOSS packages which didn’t change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris.
Package  11.0  11.1
archiver/unrar  3.8.5  4.1.4
audio/sox  14.3.0  14.3.2
backup/rdiff-backup  1.2.1  1.3.3
communication/im/pidgin  2.10.0  2.10.5
compress/gzip  1.3.5  1.4
compress/xz  not included  5.0.1
database/sqlite-3  3.7.6.3  3.7.11
desktop/remote-desktop/tigervnc  1.0.90  1.1.0
desktop/window-manager/xcompmgr  1.1.5  1.1.6
desktop/xscreensaver  5.12  5.15
developer/build/autoconf  2.63  2.68
developer/build/autoconf/xorg-macros  1.15.0  1.17
developer/build/automake-111  not included  1.11.2
developer/build/cmake  2.6.2  2.8.6
developer/build/gnu-make  3.81  3.82
developer/build/imake  1.0.4  1.0.5
developer/build/libtool  1.5.22  2.4.2
developer/build/makedepend  1.0.3  1.0.4
developer/documentation-tool/doxygen  1.5.7.1  1.7.6.1
developer/gnu-binutils  2.19  2.21.1
developer/java/jdepend  not included  2.9
developer/java/jdk-6  1.6.0.26  1.6.0.35
developer/java/jdk-7  1.7.0.0  1.7.0.7
developer/java/jpackage-utils  not included  1.7.5
developer/java/junit  4.5  4.10
developer/lexer/jflex  not included  1.4.1
developer/parser/byaccj  not included  1.14
developer/parser/java_cup  not included  0.10
developer/quilt  0.47  0.60
developer/versioning/git  1.7.3.2  1.7.9.2
developer/versioning/mercurial  1.8.4  2.2.1
developer/versioning/subversion  1.6.16  1.7.5
diagnostic/constype  1.0.3  1.0.4
diagnostic/nmap  5.21  5.51
diagnostic/scanpci  0.12.1  0.13.1
diagnostic/wireshark  1.4.8  1.8.2
diagnostic/xload  1.1.0  1.1.1
editor/gnu-emacs  23.1  23.4
editor/vim  7.3.254  7.3.600
file/lndir  1.0.2  1.0.3
image/editor/bitmap  1.0.5  1.0.6
image/gnuplot  4.4.0  4.6.0
image/library/libexif  0.6.19  0.6.21
image/library/libpng  1.4.8  1.4.11
image/library/librsvg  2.26.3  2.34.1
image/xcursorgen  1.0.4  1.0.5
library/audio/pulseaudio  not included  1.1
library/cacao  2.3.0.0  2.3.1.0
library/expat  2.0.1  2.1.0
library/gc  7.1  7.2
library/graphics/pixman  0.22.0  0.24.4
library/guile  1.8.4  1.8.6
library/java/javadb  10.5.3.0  10.6.2.1
library/java/subversion  1.6.16  1.7.5
library/json-c  not included  0.9
library/libedit  not included  3.0
library/libee  not included  0.3.2
library/libestr  not included  0.1.2
library/libevent  1.3.5  1.4.14.2
library/liblouis  not included  2.1.1
library/liblouisxml  not included  2.1.0
library/libtecla  1.6.0  1.6.1
library/libtool/libltdl  1.5.22  2.4.2
library/nspr  4.8.8  4.8.9
library/openldap  2.4.25  2.4.30
library/pcre  7.8  8.21
library/perl-5/subversion  1.6.16  1.7.5
library/python-2/jsonrpclib  not included  0.1.3
library/python-2/lxml  2.1.2  2.3.3
library/python-2/nose  not included  1.1.2
library/python-2/pyopenssl  not included  0.11
library/python-2/subversion  1.6.16  1.7.5
library/python-2/tkinter-26  2.6.4  2.6.8
library/python-2/tkinter-27  2.7.1  2.7.3
library/security/nss  4.12.10  4.13.1
library/security/openssl  1.0.0.5 (1.0.0e)  1.0.0.10 (1.0.0j)
mail/thunderbird  6.0  10.0.6
network/dns/bind  9.6.3.4.3  9.6.3.7.2
package/pkgbuild  not included  1.3.104
print/filter/enscript  not included  1.6.4
print/filter/gutenprint  5.2.4  5.2.7
print/lp/filter/foomatic-rip  3.0.2  4.0.15
runtime/java/jre-6  1.6.0.26  1.6.0.35
runtime/java/jre-7  1.7.0.0  1.7.0.7
runtime/perl-512  5.12.3  5.12.4
runtime/python-26  2.6.4  2.6.8
runtime/python-27  2.7.1  2.7.3
runtime/ruby-18  1.8.7.334  1.8.7.357
runtime/tcl-8/tcl-sqlite-3  3.7.6.3  3.7.11
security/compliance/openscap  not included  0.8.1
security/nss-utilities  4.12.10  4.13.1
security/sudo  1.8.1.2  1.8.4.5
service/network/dhcp/isc-dhcp  4.1  4.1.0.6
service/network/dns/bind  9.6.3.4.3  9.6.3.7.2
service/network/ftp (ProFTPD)  1.3.3.0.5  1.3.3.0.7
service/network/samba  3.5.10  3.6.6
shell/conflict  0.2004.9.1  0.2010.6.27
shell/pipe-viewer  1.1.4  1.2.0
shell/zsh  4.3.12  4.3.17
system/boot/grub  0.97  1.99
system/font/truetype/liberation  1.4  1.7.2
system/library/freetype-2  2.4.6  2.4.9
system/library/libnet  1.1.2.1  1.1.5
system/management/cim/pegasus  2.9.1  2.11.0
system/management/ipmitool  1.8.10  1.8.11
system/management/wbem/wbemcli  1.3.7  1.3.9.1
system/network/routing/quagga  0.99.8  0.99.19
system/rsyslog  not included  6.2.0
terminal/luit  1.1.0  1.1.1
text/convmv  1.14  1.15
text/gawk  3.1.5  3.1.8
text/gnu-grep  2.5.4  2.10
web/browser/firefox  6.0.2  10.0.6
web/browser/links  1.0  1.0.3
web/java-servlet/tomcat  6.0.33  6.0.35
web/php-53  not included  5.3.14
web/php-53/extension/php-apc  not included  3.1.9
web/php-53/extension/php-idn  not included  0.2.0
web/php-53/extension/php-memcache  not included  3.0.6
web/php-53/extension/php-mysql  not included  5.3.14
web/php-53/extension/php-pear  not included  5.3.14
web/php-53/extension/php-suhosin  not included  0.9.33
web/php-53/extension/php-tcpwrap  not included  1.1.3
web/php-53/extension/php-xdebug  not included  2.2.0
web/php-common  not included  11.1
web/proxy/squid  3.1.8  3.1.18
web/server/apache-22  2.2.20  2.2.22
web/server/apache-22/module/apache-sed  2.2.20  2.2.22
web/server/apache-22/module/apache-wsgi  not included  3.3
x11/diagnostic/xev  1.1.0  1.2.0
x11/diagnostic/xscope  1.3  1.3.1
x11/documentation/xorg-docs  1.6  1.7
x11/keyboard/xkbcomp  1.2.3  1.2.4
x11/library/libdmx  1.1.1  1.1.2
x11/library/libdrm  2.4.25  2.4.32
x11/library/libfontenc  1.1.0  1.1.1
x11/library/libfs  1.0.3  1.0.4
x11/library/libice  1.0.7  1.0.8
x11/library/libsm  1.2.0  1.2.1
x11/library/libx11  1.4.4  1.5.0
x11/library/libxau  1.0.6  1.0.7
x11/library/libxcb  1.7  1.8.1
x11/library/libxcursor  1.1.12  1.1.13
x11/library/libxdmcp  1.1.0  1.1.1
x11/library/libxext  1.3.0  1.3.1
x11/library/libxfixes  4.0.5  5.0
x11/library/libxfont  1.4.4  1.4.5
x11/library/libxft  2.2.0  2.3.1
x11/library/libxi  1.4.3  1.6.1
x11/library/libxinerama  1.1.1  1.1.2
x11/library/libxkbfile  1.0.7  1.0.8
x11/library/libxmu  1.1.0  1.1.1
x11/library/libxmuu  1.1.0  1.1.1
x11/library/libxpm  3.5.9  3.5.10
x11/library/libxrender  0.9.6  0.9.7
x11/library/libxres  1.0.5  1.0.6
x11/library/libxscrnsaver  1.2.1  1.2.2
x11/library/libxtst  1.2.0  1.2.1
x11/library/libxv  1.0.6  1.0.7
x11/library/libxvmc  1.0.6  1.0.7
x11/library/libxxf86vm  1.1.1  1.1.2
x11/library/mesa  7.10.2  7.11.2
x11/library/toolkit/libxaw7  1.0.9  1.0.11
x11/library/toolkit/libxt  1.0.9  1.1.3
x11/library/xtrans  1.2.6  1.2.7
x11/oclock  1.0.2  1.0.3
x11/server/xdmx  1.10.3  1.12.2
x11/server/xephyr  1.10.3  1.12.2
x11/server/xorg  1.10.3  1.12.2
x11/server/xorg/driver/xorg-input-keyboard  1.6.0  1.6.1
x11/server/xorg/driver/xorg-input-mouse  1.7.1  1.7.2
x11/server/xorg/driver/xorg-input-synaptics  1.4.1  1.6.2
x11/server/xorg/driver/xorg-input-vmmouse  12.7.0  12.8.0
x11/server/xorg/driver/xorg-video-ast  0.91.10  0.93.10
x11/server/xorg/driver/xorg-video-ati  6.14.1  6.14.4
x11/server/xorg/driver/xorg-video-cirrus  1.3.2  1.4.0
x11/server/xorg/driver/xorg-video-dummy  0.3.4  0.3.5
x11/server/xorg/driver/xorg-video-intel  2.10.0  2.18.0
x11/server/xorg/driver/xorg-video-mach64  6.9.0  6.9.1
x11/server/xorg/driver/xorg-video-mga  1.4.13  1.5.0
x11/server/xorg/driver/xorg-video-openchrome  0.2.904  0.2.905
x11/server/xorg/driver/xorg-video-r128  6.8.1  6.8.2
x11/server/xorg/driver/xorg-video-trident  1.3.4  1.3.5
x11/server/xorg/driver/xorg-video-vesa  2.3.0  2.3.1
x11/server/xorg/driver/xorg-video-vmware  11.0.3  12.0.2
x11/server/xserver-common  1.10.3  1.12.2
x11/server/xvfb  1.10.3  1.12.2
x11/server/xvnc  1.0.90  1.1.0
x11/session/sessreg  1.0.6  1.0.7
x11/session/xauth  1.0.6  1.0.7
x11/session/xinit  1.3.1  1.3.2
x11/transset  0.9.1  1.0.0
x11/trusted/trusted-xorg  1.10.3  1.12.2
x11/x11-window-dump  1.0.4  1.0.5
x11/xclipboard  1.1.1  1.1.2
x11/xclock  1.0.5  1.0.6
x11/xfd  1.1.0  1.1.1
x11/xfontsel  1.0.3  1.0.4
x11/xfs  1.1.1  1.1.2

P.S. To get the version numbers for this table, I ran a quick perl script over the output from:

% pkg contents -H -r -t depend -a type=incorporate -o fmri `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.1.0.0.24` | sort > /tmp/11.1
% pkg contents -H -r -t depend -a type=incorporate -o fmri `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.0.0.0.2` | sort > /tmp/11.0

    Read the article

  • SQL Developer Q&A from ODTUG Tips & Tricks Webcast

    - by thatjeffsmith
    Another great webcast yesterday – if you're a paying member of ODTUG you can watch the show for yourself in their archives. If not, you can get my slide deck off of SlideShare. About 150 of you brave souls sat through an entire hour of me talking and then 10 more minutes of Q&A. We went through everything rapid-fire style, so I thought I would post the questions and my refined answers here for your perusal, in the order in which I received them:

Q: You showed the preference to choose between result sets in the same tab or in a new tab. I understand that we can't have both using different hotkeys? For example: F5 runs with the result set in the same tab, Ctrl+F5 does the same but to a new tab. Sometimes you want the one, other times the other.
A: The questioner is asking about this preference: Tools – Preferences – Database – Worksheet – 'Show query results in new tabs.' It's an all-or-nothing proposition. But there's another, perhaps better way: document pins. If you have a result set you don't want to lose, pin it. You can pin multiple result sets or plans for review and comparison.

Q: You mentioned that sometimes it's hard to remember where a certain preference is. I agree, so an enhancement request: add a search box to the preferences window, like in, for example, UltraEdit. It shows you all preferences containing your search criteria.
A: Actually, we do have a search mechanism: type the search string and we auto-filter the preferences.

Q: Is there a version of SQL Developer that will connect to an 8i database? (Yes, I realize how old that database version is!)
A: Sorry, no. We also don't have a version that will run on Windows 3.11 for Workgroups…probably.

Q: How do we access your blog?
A: Carefully, and with much trepidation. When you're ready, go to http://www.thatjeffsmith.com

Q: Is there a way to get good formatting with predefined settings?
A: I believe the questioner is referring to the script output a la SQL*Plus formatting commands. Yes, there is. You can build your formatting commands into your login.sql script, and those will be applied for your script execution sessions. Example here.

Q: Why doesn't version 4.0 support external plugins?
A: It does; it just requires the plugin developer to refactor it for OSGi. This came about when we updated the JDeveloper framework to the later 11g/12c stuff.

Q: Any change in the hookup with SVN?
A: The only change with Subversion is that internally we're using 1.7 stuff now. You can use SQL Developer to work with a 1.8 SVN server, but if you get a working copy with a 1.8 client, SQL Developer won't be able to do anything with it…

Q: Command-line utilities? Improvements?
A: Yes! The long answer is here.

Q: Is that a hint or a comment: /*CSV*/?
A: It's a comment – the database won't recognize it, but SQL Developer does when it goes through our statement pre-processor. We'll redirect the output through our CSV formatter before displaying the results in the Script Output panel. That's why this will ONLY work in SQL Developer. For example, running select /*csv*/ * from hr.employees as a script returns the rows as comma-separated values in the Script Output panel.

Q: Are you selecting "Run Script" to get that CSV or HTML output, rather than "Run Statement"?
A: Yes, the formatter hints like the CSV one mentioned above only make sense in a Script Output panel versus a grid.

Q: How do you save relational models once they're defined? I've had trouble with setting one up, "saving" it, and then the design work I did is no longer there when loading it later.
A: File – Data Modeler – Save. If you're running the Modeler inside of SQL Developer, the menu interface can get a bit tricky. That's why I recommend using the standalone Data Modeler if you're doing anything with a model that takes more than 5 minutes. See how the Data Modeler menus are folded up under the SQL Developer menus?

Q: Can you unplug and plug into another container in a database with only SQL Developer?
A: Yes, you can 'Detach' a multitenant 12c Database pluggable and plug it into another instance. You have the option to copy or move the files. This isn't a trivial operation; pay attention.

Q: Can you run APEX code directly on the adopter?
A: No, at least not as I understand your question. Give me an example and I can give you a better answer.

Q: Is there a way that when you click on a particular table it wouldn't show the table with the info, but just the columns underneath when clicking on the node?
A: Yes, another one of my tips! Disable Tools – Preferences – Database – ObjectViewer – 'Open Object on Single Click.'

Q: Is there a patch to allow a double-click on a procedure in an open package body to take you to that procedure in the editor?
A: This has been fixed for EA3 – to be released soon.

Q: Can you open the spec with the body?
A: You can open the spec or the body, and then also open the other. But you can't open both with a single click.

Q: So if you set it to CSV, can you also see it as a regular result set in rows and then click in the results to export to Excel?
A: If you run your query as a statement with Ctrl+Enter, you can send the data to Excel via the Export dialog.

Q: Will it do IntelliSense, like using the alias to pop up the column and object names?
A: Yes! You can select more than one column…

Q: Can a DBA turn off items from a high level for users so the only thing they can perform would be selects?
A: A DBA should turn things ON, not OFF. Create a user with only CONNECT and the required SELECT privileges and you're good to go, regardless of which application they are using.

Q: I use PL/SQL Developer from Allround Automations and was SQL Developer illiterate, and now I like this for myself as a DBA. Now I get to train developers on this tool since they have been asking how to use it. Thank you.
A: No, THANK YOU!

Q: Can you run multiple queries in the worksheet after you've added them to the worksheet?
A: Yes, highlight what you want to run, and hit Ctrl+Enter.

Q: Can you export the result sets to Excel, etc.?
A: Yes. In version 4.0 and going forward, I recommend you use the XLSX option for exports. It will run faster and consume much, much less memory.

Q: Will this be available after the webinar?
A: If you are an ODTUG member, check out the webinar recordings in the archives. That's worth the $99 right there. Ask your boss if they have $99 in their training budget for you. If not, maybe it's time to look for another job?

Q: Can you run command lines from this tool? Like executes without issuing a command-line prompt?
A: OK, I'm stumped on this one. Not sure what you're asking. You can set up external tools under the Tools menu, and from there you could probably rig what you're looking for, but I'm not sure what you're looking for… This maybe? Where and when to put the program

Q: Is there any way to save a copy-database command set (certain tables/views, etc.) in a script?
A: Yes! Create a Cart with the objects you want to be used in the copy. Then use the new command-line interface to kick off SQL Developer to do the copy of those said objects.

Q: How can we export the preferences and then import them into a different or the same version of SQL Developer?
A: Today, there's no interface for this. But you could copy the files around manually… Kris Rice has a cool idea where you set your preferences to be saved to your local Dropbox folder, and then you can use SQL Developer from anywhere with the same preferences.

Q: What happens to SQL*Plus commands like COL & BREAK?
A: Nothing. Those are not currently supported.

Q: Is there a place where all "hotkey" functionality is listed? Thanks.
A: Yes: Tools – Preferences – Shortcut Keys. And you can change them!

Q: Any tips for the DBA side of things? Will the SQL generated for objects have more information (e.g., user privileges) in v4?
A: You can get this now. In Tools – Preferences – Database – Utilities – Export, check 'Grants.' Voila! You now have the code necessary to recreate your object privileges.

Q: Is there a limit on the number of rows that can be imported/exported from/to Excel?
A: The only hard-coded limit lies in Excel. For best performance, use v4 and the XLSX format for exports.

Q: Is there a way to see/watch active sessions to see the current SQL and the explain plan being used, etc.? Kind of like that frog product.
A: Cough, yes. Tools – Monitor Sessions. Click on a session to see its SQL and plan. The plan was added in v4; if you're not on version 4, use Reports – Active Sessions to get the plans.

Q: In the DBA section, is there a way to manage, say, tablespaces: add data files, shrink, edit profiles, etc.?
A: Yes, we support all of that. View – DBA. Connect, then go to the Storage node.

Q: Are you (Jeff) available for a live presentation at our Oracle User Group here in Indiana?
A: Maybe. Email me and we'll see, [email protected]

Q: Where do I go to download SQL Developer 4.0?
A: The Internet, of course!

Q: Can you directly edit query results?
A: Nope. But what I think you're asking is: can I edit the data in the tables that are reflected in my query results? You can change the query results by changing your query, of course. Or this.

Q: Can you show an HTML example?
A: Sure. I'd embed the HTML here, but it's a lot of code; try it for yourself!

Q: How can I quickly close many SQL Worksheet windows, but not all?
A: Window – Documents. Multi-select, then hit the 'Close Document(s)' button.

Q: What does the vertical red line denote?
A: That's the margin. It tells you when you've typed too far and it's time for a carriage return.

Q: Did the DBA/Database Status/Instance Viewer make it officially into 4.0?
A: It was sort-of included in the first EA. I have NO idea what you're talking about, WINK-WINK. No, it's not in v4.0.

Q: Is there a "handy" way to debug trigger code?
A: Yes, open your trigger and hit the Debug button. Works great as long as it's a DML trigger.

Q: Will you make your presentation file available for us (in PPT and/or PDF format)?
A: It's on SlideShare.

Q: How do you get SQL Developer to escape ' correctly when you use the wizard to export data as INSERT statements?
A: If it's not doing that, it's a bug. I'll take a look at that scenario ASAP.

    Read the article

  • Metro, Authentication, and the ASP.NET Web API

    - by Stephen.Walther
    Imagine that you want to create a Metro style app written with JavaScript that communicates with a remote web service. For example, you are creating a movie app which retrieves a list of movies from a movies service. In this situation, how do you authenticate your Metro app and the Metro user so that not just anyone can call the movies service? How can you identify the user making the request so you can return user-specific data from the service?

The Windows Live SDK supports a feature named Single Sign-On. When a user logs into a Windows 8 machine using their Live ID, you can authenticate the user's identity automatically. Even better, when the Metro app performs a call to a remote web service, you can pass an authentication token to the remote service and prevent unauthorized access to the service. The documentation for Single Sign-On is located here: http://msdn.microsoft.com/en-us/library/live/hh826544.aspx

In this blog entry, I describe the steps that you need to follow to use Single Sign-On with a (very) simple movie app. We build a Metro app which communicates with a web service created using the ASP.NET Web API.

Creating the Visual Studio Solution

Let's start by creating a Visual Studio solution which contains two projects: a Windows Metro style Blank App project and an ASP.NET MVC 4 Web Application project. Name the Metro app MovieApp and the ASP.NET MVC application MovieApp.Services. When you create the ASP.NET MVC application, select the Web API template. After you create the two projects, your Visual Studio Solution Explorer window should look like this:

Configuring the Live SDK

You need to get your hands on the Live SDK and register your Metro app. You can download the latest version of the SDK (version 5.2) from the following address: http://www.microsoft.com/en-us/download/details.aspx?id=29938

After you download the Live SDK, you need to visit the following website to register your Metro app: https://manage.dev.live.com/build

Don't let the title of the website, Windows Push Notifications & Live Connect, confuse you; this is the right place. Follow the instructions at the website to register your Metro app, and don't forget to follow the instructions in Step 3 for updating the information in your Metro app's manifest. After you register, your client secret is displayed. Record this client secret because you will need it later (we use it with the web service).

You need to configure one more thing. You must enter your Redirect Domain by visiting the following website: https://manage.dev.live.com/Applications/Index

Click on your application name, click Edit Settings, click the API Settings tab, and enter a value for the Redirect Domain field. You can enter any domain that you please, just as long as the domain has not already been taken. For the Redirect Domain, I entered http://superexpertmovieapp.com.

Create the Metro MovieApp

Next, we need to create the MovieApp. The MovieApp will:
1. Use Single Sign-On to log the current user into Live
2. Call the MoviesService web service
3. Display the results in a ListView control

Because we use the Live SDK in the MovieApp, we need to add a reference to it.
Right-click your References folder in the Solution Explorer window and add the reference. Here's the HTML page for the Metro app:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8" />
        <title>MovieApp</title>

        <!-- WinJS references -->
        <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" />
        <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script>
        <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script>

        <!-- Live SDK -->
        <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script>

        <!-- WebServices references -->
        <link href="/css/default.css" rel="stylesheet" />
        <script src="/js/default.js"></script>
    </head>
    <body>
        <div id="tmplMovie" data-win-control="WinJS.Binding.Template">
            <div class="movieItem">
                <span data-win-bind="innerText:title"></span>
                <br /><span data-win-bind="innerText:director"></span>
            </div>
        </div>
        <div id="lvMovies"
            data-win-control="WinJS.UI.ListView"
            data-win-options="{ itemTemplate: select('#tmplMovie') }">
        </div>
    </body>
    </html>

The HTML page above contains a Template and a ListView control. These controls are used to display the movies when the movies are returned from the movies service. Notice that the page includes a reference to the Live script that we registered earlier:

    <!-- Live SDK -->
    <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script>

The JavaScript code looks like this:

    (function () {
        "use strict";

        var REDIRECT_DOMAIN = "http://superexpertmovieapp.com";
        var WEBSERVICE_URL = "http://localhost:49743/api/movies";

        function init() {
            WinJS.UI.processAll().done(function () {
                // Get element and control references
                var lvMovies = document.getElementById("lvMovies").winControl;

                // Login to Windows Live
                var scopes = ["wl.signin"];
                WL.init({
                    scope: scopes,
                    redirect_uri: REDIRECT_DOMAIN
                });
                WL.login().then(
                    function (response) {
                        // Get the authentication token
                        var authenticationToken = response.session.authentication_token;

                        // Call the web service
                        var options = {
                            url: WEBSERVICE_URL,
                            headers: { authenticationToken: authenticationToken }
                        };
                        WinJS.xhr(options).done(
                            function (xhr) {
                                var movies = JSON.parse(xhr.response);
                                var listMovies = new WinJS.Binding.List(movies);
                                lvMovies.itemDataSource = listMovies.dataSource;
                            },
                            function (xhr) {
                                console.log(xhr.statusText);
                            }
                        );
                    },
                    function (response) {
                        throw WinJS.ErrorFromName("Failed to login!");
                    }
                );
            });
        }

        document.addEventListener("DOMContentLoaded", init);
    })();

There are two constants which you need to set to get the code above to work: REDIRECT_DOMAIN and WEBSERVICE_URL. The REDIRECT_DOMAIN is the domain that you entered when registering your app with Live. The WEBSERVICE_URL is the path to your web service. You can get the correct value for WEBSERVICE_URL by opening the Project Properties for the MovieApp.Services project, clicking the Web tab, and getting the correct URL; the port number is randomly generated. In my code, I used the URL "http://localhost:49743/api/movies".

Assuming that the user is logged into Windows 8 with a Live account, when the user runs the MovieApp, the user is logged into Live automatically. The user is logged in with the following code:

    // Login to Windows Live
    var scopes = ["wl.signin"];
    WL.init({
        scope: scopes,
        redirect_uri: REDIRECT_DOMAIN
    });
    WL.login().then(function (response) {
        // Do something
    });

The scopes setting determines what the user has permission to do: for example, access the user's SkyDrive, or access the user's calendar or contacts.
The available scopes are listed here: http://msdn.microsoft.com/en-us/library/live/hh243646.aspx

In our case, we only need the wl.signin scope, which enables Single Sign-On. After the user signs in, you can retrieve the user's Live authentication token. The authentication token is passed to the movies service to authenticate the user.

Creating the Movies Service

The Movies Service is implemented as an API controller in an ASP.NET MVC 4 Web API project. Here's what the MoviesController looks like:

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using JWTSample;
    using MovieApp.Services.Models;

    namespace MovieApp.Services.Controllers
    {
        public class MoviesController : ApiController
        {
            const string CLIENT_SECRET = "NtxjF2wu7JeY1unvVN-lb0hoeWOMUFoR";

            // GET api/movies
            public HttpResponseMessage Get()
            {
                // Authenticate: get the authenticationToken header
                var authenticationToken = Request.Headers.GetValues("authenticationToken").FirstOrDefault();
                if (authenticationToken == null)
                {
                    return new HttpResponseMessage(HttpStatusCode.Unauthorized);
                }

                // Validate token
                var d = new Dictionary<int, string>();
                d.Add(0, CLIENT_SECRET);
                try
                {
                    var myJWT = new JsonWebToken(authenticationToken, d);
                }
                catch
                {
                    return new HttpResponseMessage(HttpStatusCode.Unauthorized);
                }

                // Return results
                return Request.CreateResponse(
                    HttpStatusCode.OK,
                    new List<Movie> {
                        new Movie {Title="Star Wars", Director="Lucas"},
                        new Movie {Title="King Kong", Director="Jackson"},
                        new Movie {Title="Memento", Director="Nolan"}
                    }
                );
            }
        }
    }

Because the Metro app performs an HTTP GET request, the MoviesController Get() action is invoked. This action returns a set of three movies when, and only when, the authentication token is validated. The Movie class looks like this:

    using Newtonsoft.Json;

    namespace MovieApp.Services.Models
    {
        public class Movie
        {
            [JsonProperty(PropertyName="title")]
            public string Title { get; set; }

            [JsonProperty(PropertyName="director")]
            public string Director { get; set; }
        }
    }

Notice that the Movie class uses the JsonProperty attribute to change Title to title and Director to director, to make JavaScript developers happy. The Get() method validates the authentication token before returning the movies to the Metro app. To get authentication to work, you need to provide the client secret which you created at the Live management site. If you forgot to write down the secret, you can get it again here: https://manage.dev.live.com/Applications/Index

The client secret is assigned to a constant at the top of the MoviesController class. The MoviesController class uses a helper class named JsonWebToken to validate the authentication token. This class was created by the Windows Live team. You can get the source code for the JsonWebToken class from the following GitHub repository: https://github.com/liveservices/LiveSDK/blob/master/Samples/Asp.net/AuthenticationTokenSample/JsonWebToken.cs

You need to add an additional reference to your MVC project to use the JsonWebToken class: System.Runtime.Serialization. You can use the JsonWebToken class to get a unique and validated user ID like this:

    var user = myJWT.Claims.UserId;

If you need to store user-specific information, you can use the UserId property to uniquely identify the user making the web service call.
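As an aside on what "validating the token" actually involves: tokens of this kind are three dot-separated base64url segments (header, claims, signature), and validation recomputes a keyed hash over the first two segments using the shared client secret, then compares it to the third. The sketch below illustrates that general mechanism in Java rather than C#, purely for illustration; the class name and secret are hypothetical, and HMAC-SHA256 is an assumption about the signing algorithm, so treat the JsonWebToken source linked above as the authoritative implementation.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class TokenSignatureCheck {

        // Hypothetical stand-in for the client secret used by the service.
        private static final String CLIENT_SECRET = "your-client-secret-here";

        // Returns true when the token's third segment matches an HMAC-SHA256
        // recomputed over "header.claims" with the shared secret.
        public static boolean isSignatureValid(String token) throws Exception {
            String[] parts = token.split("\\.");
            if (parts.length != 3) {
                return false; // expected format: header.claims.signature
            }

            Mac hmac = Mac.getInstance("HmacSHA256");
            hmac.init(new SecretKeySpec(
                    CLIENT_SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] recomputed = hmac.doFinal(
                    (parts[0] + "." + parts[1]).getBytes(StandardCharsets.UTF_8));

            byte[] presented = Base64.getUrlDecoder().decode(parts[2]);

            // Constant-time comparison, so a mismatch doesn't leak timing information.
            return MessageDigest.isEqual(recomputed, presented);
        }
    }

A real validator would additionally check the token's expiry claim, which is why relying on the vetted helper class rather than hand-rolling this is the safer design choice.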
Running the MovieApp

When you first run the Metro MovieApp, you get a screen which asks whether the app should have permission to use Single Sign-On. This screen never appears again after you give permission once. Actually, when I first ran the app, I got the following error: according to the error, the app was blocked because "We detected some suspicious activity with your Online Id account. To help protect you, we've temporarily blocked your account." This appears to be a bug in the current preview release of the Live SDK, and there is more information about it here: http://social.msdn.microsoft.com/Forums/en-US/messengerconnect/thread/866c495f-2127-429d-ab07-842ef84f16ae/

If you click Continue and keep running the app, the error message does not appear again.

Summary

The goal of this blog entry was to describe how you can validate Metro apps and Metro users when performing a call to a remote web service. First, I explained how you can create a Metro app which takes advantage of Single Sign-On to authenticate the current user against Live automatically. You learned how to register your Metro app with Live and how to include an authentication token in an Ajax call. Next, I explained how you can validate the authentication token, retrieved from the request header, in a web service. I discussed how you can use the JsonWebToken class to validate the authentication token and retrieve the unique user ID.

    Read the article

  • top tweets WebLogic Partner Community – June 2012

    - by JuergenKress
    Send your tweets @wlscommunity #WebLogicCommunity and follow us at http://twitter.com/wlscommunity

OTNArchBeat: Free Virtual Developer Day: Oracle ADF and Oracle Fusion Middleware Development http://bit.ly/MxuNAg
AMIS, Oracle & Java: Veterinarian checklist now also on iPad. @amis_services Mobile integration with Oracle Fusion Middleware http://dld.bz/buwsM #OSB #SOA
Whitehorses: Whiteblog: Troubleshoot JVM crashes of WebLogic: CompilerThread (http://bit.ly/KcGzZK)
Jon Petter Hjulstad: E-vita is now Apps Grid Specialized! ODTUG Fusion Middleware Sessions RT @OTNArchBeat: ODTUG Kscope12 - June 24-28 - San Antonio, TX http://bit.ly/LlWkNV
OTNArchBeat: Free Event: Modern #Java Development, in/outside the Enterprise - May 30 - Redwood Shores, CA http://bit.ly/LfB79a
ADF Community DE: Oracle Advanced ADF 11g Partner Workshop, Düsseldorf/Germany (English), June 26-29, click here to see
Nicolas Lorain: Best Practices for #JavaFX 2 Enterprise Applications (Part Two) http://buff.ly/Lk1DBn by Jim Weaver
Shay Shmeltzer: #Oracle Developers in #Israel - don't miss the free #ADF workshop July 2nd - get hands-on with Oracle ADF - here
OTNArchBeat: Java at JAXconf | Tori Wieldt http://bit.ly/LdoLS2
Anand Akela: #Oracle Customers and Partners – Get your free pass to @CloudExpo in New York, June 11 to 14, http://goo.gl/RpYFT <- Stop by booth #511
OracleSupport_WLS: Did you know that since 3/15/12 #WebLogic Server 12.1.1.0 is certified for production with JDK 7? http://bit.ly/IYJE0L
Sharat: Highly useful #JavaFX best practices blog by @JavaFXpert More details here
ADF EMG: How to set up a productive ADF Dev Env - discussion started by @baigsorcl. Click here to read and comment.
OracleSupport_WLS: Upcoming #webcast: Diagnosing #weblogic performance issues through #java thread dumps http://bit.ly/M4O9qF
My Oracle Support: New to Oracle Support? Webcast on Support Basics, May 22, 10:30 Central Europe. Register @ http://bit.ly/J8o0WG
Mohamad Afshar: Cloud Expo – Oracle Customers and Partners – get your free pass to Cloud Expo in New York, June 11 to 14, http://goo.gl/RpYFT
OTNArchBeat: Oracle VM 3.1 is here | @Ronenkofman http://bit.ly/JriWTq
Oracle Exalogic: RT @D0uglasPhillips: ExalogicTV New Video Introducing Oracle Secure Global Desktop for #Exalogic!! http://bit.ly/nwkrCu
OracleBlogs: Java EE6 and WebLogic YouTube video channels http://ow.ly/1jVcYJ
Oracle WebLogic: RT @aleftik: Excited to spend some time today playing around with the WebSockets SDK http://bit.ly/NoTtri
WebLogic Community: Java EE6 and WebLogic YouTube video channels http://wp.me/p1LMIb-h0
OracleSupport_WLS: New tutorial! How to use the #JMS #API to create a message producer with #GlassFish and #NetBeans http://bit.ly/Juqjn
JDeveloper & ADF: Tip when installing JDeveloper 11.1.2.2.0 http://dlvr.it/1b48s1
WebLogic Community: Middleware Oracle Excellence Awards 2012 – HAPPY NEW YEAR! Click here to read #opn #oracle #Specialization #opnaward
Steven Davelaar: Improve performance of your ADF app using lazy, on-demand querying of detail view objects: click here
OracleBlogs: Middleware Oracle Excellence Awards 2012 & HAPPY NEW YEAR! http://ow.ly/1kahzZ
OracleSupport_WLS: Upgrading from #weblogic 9.2.x to 10.3.x? http://bit.ly/Kqzl9N
AMIS, Oracle & Java: "@JDeveloper: Logout from an ADF application http://dlvr.it/1fQBnm"
WebLogic Community: UK OUG call for papers – your middleware success! Click here #UKOUG #soacommunity #OPN
Whitehorses: Whiteblog: Enterprise Manager: Manage your Fusion Middleware logfiles (http://bit.ly/KQlZkR)
WebLogic Community: @Jphjulstad Hi Jon, should we send pizza when you go into production with your WebLogic 12c project? Wish you success! #WebLogicCommunity
Sabine Leitner: ADF beginner workshops, 2 days each, in June in HAM, BLN, HANN #Oracle #WLS http://bit.ly/LcOIzB @OracleWebLogic @OracleAppGrid @soacommunity
Andreas Koop: new post: Java Heap Monitor in JDeveloper http://bit.ly/LgSk85
Sabine Leitner: #Oracle customer day with talks from Sparkasse, Schufa, LBBW and Allianz on FMW & Exa solutions! June 21, FRA http://bit.ly/JtwE3v @wlscommunity
NetBeans Team: RT @chadlung: Installing and configuring #NetBeans 7.1.2 and the #Java JDK 1.7 on OS X: http://www.giantflyingsaucer.com/blog/p=3760 #osx
WebLogic Community: Happy New Year #WeblogicCommunity, thanks for the business! Time for a drink http://pic.twitter.com/K34KFbvH
WebLogic Community: UK OUG call for papers – your middleware success! http://wp.me/p1LMIb-gU
WebLogic Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://wp.me/p1LMIb-h6
Oracle WebLogic: RT @wlscommunity: WebLogic World Record Two Processor Result with SPECjEnterprise2010 Benchmark Click here to read #weblogic #sunfire #li
Marc: Relocate wlst script for all the logfiles in your domain @wlscommunity, http://tinyurl.com/btbjcco
WebLogic Community: WebLogic World Record Two Processor Result with SPECjEnterprise2010 Benchmark Click here #WebLogicCommunity #weblogic #sunfire
Oracle WebLogic: Miss a WebLogic DevCast webinar? Catch any of the replays in the series on-demand! #WebLogic #JavaEE #coherence http://bit.ly/LNGa4p
JDeveloper & ADF: Bean DataControl - Edit table records http://dlvr.it/1ZWqCx
Justin Kestelyn: Contents of "Virtual Developer Day: Java SE 7 and JavaFX 2.0" are now available on demand; no registration http://tinyurl.com/78nxnyo
Frank Nimphius: Preparing 12c new features for DOAG 2012 Development - June 14th in Bonn (http://development.doag.org)
WebLogic Community: Middleware Oracle Excellence Awards 2012 – HAPPY NEW YEAR! http://wp.me/p1LMIb-he
JDeveloper & ADF: Placeholder Watermarks with ADF 11.1.2 http://dlvr.it/1ZWDc9
Oracle ACE Program: May edition #ACE newsletter now available online. http://bit.ly/LKA2de
chriscmuir: New blog post: Which JDeveloper is right for me? http://bit.ly/J8sj9e
GlassFish: Transactional Interceptors in Java EE 7 - Request for feedback: Linda described how EJB's container-managed tr… http://bit.ly/KKuGNJ
OracleEnterpriseMgr: Oracle Application Testing Suite 12.1 Debuts at StarEast 2012 http://ow.ly/aXcv8 #em12c
JAX London: First set of speaker sessions announced for #JAXLondon, see: http://bit.ly/L0HSME
OTNArchBeat: Oracle Cloud Conference: dates and locations worldwide http://bit.ly/JgNeID
NetBeans Team: Video: Create and debug a TestNG test class in #NetBeans IDE: http://ow.ly/b7NEW
NetBeans Team: #NetBeans tip: Code Template for #Kohana #PHP Framework: http://ow.ly/aWIvY
Robin: Started to use the #Oracle #WebLogic Server #Maven Plugin. Really awesome to install a complete #WLS with "mvn wls:install"! @wlscommunity
OTNArchBeat: Free Event: Modern #Java Development, in/outside the Enterprise - May 30 - Redwood Shores, CA http://bit.ly/JIN9tf
OracleBlogs: WebLogic Partner Community Newsletter May 2012 http://ow.ly/1k5TeG
Java Certification: Java SE 7 Fundamentals course now available On Demand. Watch a preview now: http://ow.ly/aWYgD
Whitehorses: Whiteblog: Native IO in WebLogic on Solaris 11 X64 (http://bit.ly/KGM4mp)
NetBeans Team: Quick video of FindBugs integration in #NetBeans IDE 7.2: http://ow.ly/aNece
NetBeans Team: #JavaFX Scene Builder docs updated for 2.2 and #NetBeans 7.2 dev builds: http://ow.ly/b7Nie
Duncan Mills: New blog posting on implementing input field watermarks with ADF Faces 11.1.2 Click here #adf
WebLogic Community: WebLogic Partner Community Newsletter May 2012 http://wp.me/p1LMIb-h4
OracleBlogs: UK OUG call for papers – your middleware success! http://ow.ly/1jNs49
Nicolas Lorain: Java tip: Deploying #JavaFX apps to multiple environments - JavaWorld http://buff.ly/KDADvu
Adam Bien: Java EE and How to Specify The Unconventional With Convention Over Configuration [Free Article]: The free… http://bit.ly/JEUkUf
Owen Hughes and team: #Oracle #Exalogic #Performance: What? How? Why? Click here
GlassFish: SecuritEE in the Cloud: Java EE 7 and the Cloud theme continue to move full steam ahead. In a PaaS environment… http://bit.ly/K2RPte
JDeveloper & ADF: How to Align Managed Bean Scope and Bean Data Control in Oracle ADF http://dlvr.it/1dngxQ
Andrejus Baranovskis: Missing New Feature in JDev (11.1.2.2.0) - ADF Methods Security http://fb.me/1jQM1enls
OracleSupport_WLS: Tutorial on managing #HTTP Sessions in a #Weblogic #Cluster http://bit.ly/JshESe
Oracle WebLogic: ZeroTurnaround developer report: #Spring keeps getting heavier, and #Java EE keeps getting lighter http://bit.ly/JDmKy2
JDeveloper & ADF: How to Search in Views - Part 4 || Oracle ADF http://dlvr.it/1dpDjZ
WebLogic Community: Java Message Service with Java and Spring Framework on Oracle WebLogic; webcast May 15th 2012 http://wp.me/p1LMIb-gS
Andreas Koop: new post: ADF Bug or Feature? Non-Breaking Space outside required icon style http://bit.ly/KDZnUo
Oracle WebLogic: Don't miss this month's WebLogic DevCast: WebLogic JMS and Spring JMS http://bit.ly/J6g2ST Tuesday May 15th 10:00am PT
JDeveloper & ADF: How To Disable SELECT COUNT Execution for ADF Table Rendering http://dlvr.it/1dqKH6
OracleSupport_WLS: #SSL and security has its own Information Center, http://bit.ly/LP8Vil for troubleshooting, install, config and more
NetBeans Team: Featured #NetBeans plugin is @Codename_One for creating native apps for major mobile platforms: http://plugins.netbeans.org/
JDeveloper & ADF: Using JDeveloper HTTP Analyzer to intercept/forward requests http://dlvr.it/1Yzl4J
Nicolas Lorain: Create native looks for JavaFX applications: JavaFX-CSS-Themes · http://buff.ly/M0jel0 by Gregg Setzer
Devoxx: Want to make the world a better place? Then get involved in Random Hacks of Kindness on June 2-3 in Belgium @ http://www.rhok.be #RHoK
WebLogic Community: top tweets WebLogic Partner Community – May 2012 Click here #WebLogicCommunity
Michel Schildmeijer: Oracle Traffic Director 11g http://lnkd.in/-mm3Vy
Andrejus Baranovskis: Proactively Monitoring JDeveloper 11g IDE Heap Memory http://fb.me/16YZErPrx
Arun Gupta: 80+ attendees building a #javaee6 application using NetBeans/WebLogic at Java Day, Istanbul. Fun times! http://pic.twitter.com/odY19daW
A. Chatziantoniou: Just registered for the Oracle FMW Summer Camp in Lisbon. Looking forward to learning, meeting friends and trying to buy ice cream on the beach
OTNArchBeat: Another Myth Debunked: 200 Continuous Redeployments with WebLogic | @munz http://bit.ly/JiPyM7
Oracle WebLogic: Need to learn more on #WebLogic Server #JVM performance tuning? http://bit.ly/MNUxHx
GlassFish: Duke's Choice Awards 2012 Nominations Are Open: the 2012 Duke's Choice Awards are open for nominations. These awards… http://bit.ly/Ksk4U3
Justin Kestelyn: Major cloud-related announcements from Larry Ellison and Mark Hurd on June 6 http://bit.ly/KTJiII
Nicolas Lorain: Transparent Windows (Stage) with #JavaFX 2: Adam Bien's Weblog http://j.mp/INgq8K
WebLogic Community: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 #WebLogicCommunity #weblogic #opn
JDeveloper & ADF: Oracle ADF - How to work with Dates http://dlvr.it/1Y70zw
OracleBlogs: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 http://ow.ly/1k2WtO
Adam Bien: Summer Java EE workshops: 23.05, Amsterdam Airport Java EE Hacking, without airport. The Dutch version of Airport… http://bit.ly/JeP6hV
JDeveloper & ADF: ADF 11g: BC4J or EJB3. http://bit.ly/JVVFZF
ADF EMG: Great discussion with JSF guru Andy Schwartz on the forum - 38 posts! Check it out here.
Devoxx: Oracle (http://www.oracle.com) joins Devoxx 2012 as the first Premium partner, welcome aboard!
Nicolas Lorain: Developing a Simple Todo Application using #JavaFX, #Java and #MongoDB - Part 1, JavaBeat http://j.mp/IDGxLA
Nicolas Lorain: Preview of JavaFX 2.2 canvas feature > Harmonic Code: Death bitmaps could be beautiful... Part I http://buff.ly/KyAXg5 #JavaFX
OTNArchBeat: New York Coherence Special Interest Group (NYCSIG) - May 24 - NYC http://bit.ly/JzJcbT
WebLogic Community: iAS upgrade to WebLogic: watch the #C2B2 online seminar http://youtu.be/5m2CNUjBIGQ #WebLogicCommunity
Ruth Collett: Join Oracle in #Joburg on May 21 for OTN Developer Day - sessions on #Java #JavaEE 6/7 and much more! http://bit.ly/IENwnD
WebLogic Community: Sending out invitations to our advanced Fusion Middleware Summer Camps! Want to learn more? Register for the community
Ruth Collett: Join @ArunGupta in Istanbul this Monday to hear the latest on #JavaEE 6/7 http://bit.ly/Je63cc
GlassFish: NetBeans 7.2 Beta - Built for Speed, Deploy Apps to Oracle Cloud: NetBeans 7.2 Beta is now available. The… http://bit.ly/LxMMTK
Lucas Jellema: My latest SlideShare upload: Java ain't scary - introducing Java to PL/SQL. here via @slideshare
JDeveloper & ADF: #Developer #free #ADF training in #Scotland - June 13. More information: http://bit.ly/LbPLlf
AMIS, Oracle & Java: AMIS is the first in the Netherlands to achieve the Oracle ADF specialization - Channelworld news, Channelconnect: http://bit.ly/JzAcB4
WebLogic Community: Web Services with JAX and Spring on WebLogic – Webcast May 30th 2012 http://wp.me/p1LMIb-gX
Nicolas Lorain: JavaFX-based SimpleDateFormat Demonstrator http://j.mp/KFCVOi #JavaFX via Dustin Marx
Oracle Exalogic: Are you an Oracle partner? There's news on the Oracle Partner Network about #Exalogic specializations - http://bit.ly/Mt3ANY
JDeveloper & ADF: Shorter URL for your ADF application http://dlvr.it/1XqNLY
OTNArchBeat: Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 http://bit.ly/JAa0Lx
OTNArchBeat: Java EE 6 Sample Application on WebLogic 12c: Conference Planner | @arungupta http://bit.ly/LPvof4
JDeveloper & ADF: Excellent example of Oracle ADF - Google Maps/Earth integration http://dlvr.it/1cbc80
JDeveloper & ADF: Setting Up JDeveloper's Embedded WLS for MySQL http://dlvr.it/1c4b8P
JDeveloper & ADF: Solution for Sharing Global User Data in ADF BC http://dlvr.it/1cc7SJ
Java: Java Magazine May/June #javaee #javafx #javame #openJDK #hotspot #wicket #lotsmore http://ow.ly/aX07v
Oracle WebLogic: http://bit.ly/JxQsnS if you have trouble finding the right #patchset when doing an upgrade to your #weblogic server
OracleEnterpriseMgr: 15 minutes to go before we start our Application Testing Suite 12.1 webcast. http://bit.ly/JHyTEe Learn from the lead PM what's new. #em12c
Sten Vesterli: Eating your own dog food - Oracle support site finally in ADF: http://lnkd.in/s6hg_p
Adam Bien: Project "Jenever" (= poison) checked in with GIT: here. CU at http://workshops.adam-bien.com. Thanks for attending!
OTNArchBeat: Web Service Development with NetBeans and Testing with WebLogic Admin Console | @munz http://bit.ly/JcWk34

Please feel free to send us your news! And add your blog to our SOA blog wiki.

    Read the article

  • CodePlex Daily Summary for Friday, December 10, 2010

    CodePlex Daily Summary for Friday, December 10, 2010

Popular Releases

Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Today we are releasing the final version of Visifire, v3.6.5, with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea brings bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. You can visit the Visifire documentation to know more: http://www.visifire.com/visifirechartsdocumentation.php. This release also includes a few bug fixes: the chart threw an exception while adding a new Axis in the chart using Vi...

PHPExcel: PHPExcel 1.7.5 Production: Donations: donate via PayPal. If you want to, we can also add your name/company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or, if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.

UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version works with ASP.NET WebPages and ASP.NET MVC applications.

DNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03). There are C# and VB versions of this module for this initial release; no promises that packages for both languages will be provided for future releases. This module provides the following functionality: create and display articles; display a paged list of articles; articles get created as DNN ContentItems; categorization provided through DNN Taxonomy; SEO functionality for article display providi...

UOB & ME: UOB_ME 2.5: latest version

CouchDB.NET: CouchDB.NET 0.1: Libraries and providers to use CouchDB features from .NET. This distribution includes the following projects: MachineKeyGenerator, a command-line tool to generate a machine key string for use in App.Config and Web.Config files; CouchDB.NET, a library to facilitate the use of CouchDB features (it uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server; more info at https://github.com/hhariri/EasyHttp); and CouchDb.ASP.NET, ASP.NET Membership Provider and ASP...

AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing build pages from Mobafire.com as well! Just insert the URL to the build and voila (for example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061). Stable release of AutoChat (it is still recommended to use it with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...

SubtitleTools: SubtitleTools 1.2: Added auto-insertion of the RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for RTL languages. Fixed a delete-rows issue.

PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is the final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus the additional features listed below: improved detection logic for existing PHP installations (PHP Manager now detects the location of the php.ini file in accordance with the PHP specifications); configuring date.timezone (PHP Manager can automatically set the date.timezone directive, which is required to be set starting from PHP 5.3); ability to ...

Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.

SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug where the running state wasn't updated after the socket server stopped; fixed a synchronization issue when clearing timed-out sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value.

CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2, released 2010 December 7, and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi and Templates-2010-10-07.zip. For getting-started instructions, refer to the How To section. Overview of the changes: since CTP1, 53 work items were closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done in several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...

My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.

EnhSim: EnhSim 2.2.0 ALPHA: This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. Updated En...

ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup, WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.

nopCommerce, ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).

myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new group-by); added sort by media; added manage movie studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTeX XML file; bug fixing.

mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web, for installation on hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database); with .\SQLEXPRESS, auto creation of the database (App_Data folder). mytrip.mvc 1.0.49.0 beta src. System requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables in the database); with .\SQLEXPRESS, auto creation of the database (App_Data folder); Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...

Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys; shortcuts like Ctrl-Alt-A are now supported (where the browser permits it); the PopupMenuSeparator is now completely based on the PopupMenuItem class; moved item manipulation code to a partial class in PopupMenuItemsControl.cs; moved menu management and keyboard navigation code to the new PopupMenuManager class; simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...)

MiniTwitter: 1.62

New Projects

AccountingGuid: for testing only
Chinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a random character throughout your day. You can only get rid of it by typing in the pinyin.
CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles providers so that you may use CouchDB as your integrated DB backend in your ASP.NET projects. Please see the readme.txt file for instructions.
DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet-based structures to proper domain objects. In essence, the aim is to create the mapping aspect of an ORM without the persistence concerns.
EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.
FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).
GearSynth Plugin: a plugin for GraphSynth that makes gear trains
GroceryList: TBD with first version
IBMS Suite Build on the Associate Platform: A new way of approaching information systems. From the UI, users of the IS will be able to build and manipulate the IS in whatever way fits their needs. We have simplified development, removed the chasm between management and IT, and given the power of simplification to the user!
Ivy Nasha Framework: A PHP framework
jQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#.
JSTest.NET: JSTest.NET enables JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc.), all without the need for a web browser. JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!
Multicore Task Framework: MTF is a visual tool to simplify building robust component-based .NET applications. MTF is designed to make full use of the power of multi-core processors.
Nazha Script On DLR: Nazha
PascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This Pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API.
Perpetuum Hangar: A character planner for the online game "Perpetuum"
Projeto Exemplo: Example project for activity 3 of the course.
PSiteCode: PSiteCode Manager
rScript Engine: The rScript scripting engine is a managed script engine written in C# that supports Visual Basic and C# syntax-based scripts. It provides types for dynamically getting and setting properties, invoking methods, and run-time compilation of scripts.
SharePoint 2010 User Profile WebPart: This web part shows all user profile properties and their values for a particular user profile. The results are shown in a table containing the display and technical names together with the user value.
SHC: shri
SHMTools: SHMTools is a set of compatible software tools (mostly Matlab-based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.
SwapWin: SwapWin is a tiny and handy tool which swaps windows between different screens. Developed in C# and .NET 3.5.
Teachers Diary: Teachers Diary is an application realizing an electronic teacher's notepad with student marks. The current localization of the application is Czech only.
VkApp: Vk app for downloading
WebSpirit: A lightweight web server implemented in C# with a sufficiently extensible feature set. By zju
WPF & MEF Studio: WPF & MEF Studio

    Read the article

  • Droid's mediaserver dies on camera.takePicture()

    - by SirBoss
    On a Motorola Droid, firmware 2.1-update1, kernel 2.6.29-omap1, build # ESE81: when attempting to take a picture, mediaserver dies with a segmentation fault. I've tried putting takePicture in a timer and running it a few seconds after camera initialization to check for race conditions, but no change. Just calling Camera.open() doesn't cause the crash. Also, calling Camera.open() causes what I think is the autofocus motor to make a sort of ticking sound.

Code that breaks:

    import android.app.Activity;
    import android.hardware.Camera;
    import android.os.Bundle;

    public final class ChopperMain extends Activity {
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            try {
                Camera camera = Camera.open();
                camera.takePicture(
                    new Camera.ShutterCallback() {
                        public void onShutter() { ; }
                    },
                    new Camera.PictureCallback() {
                        public void onPictureTaken(byte[] data, Camera camera) { ; }
                    },
                    new Camera.PictureCallback() {
                        public void onPictureTaken(byte[] data, Camera camera) { ; }
                    },
                    new Camera.PictureCallback() {
                        public void onPictureTaken(byte[] data, Camera camera) {
                            System.out.println("Ta da.");
                        }
                    });
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Debug Log:

D/CameraHal(10158): CameraSettings constructor
D/CameraHal(10158): CameraHal constructor
D/CameraHal(10158): Model ID: Droid
D/CameraHal(10158): Software ID 2.1-update1
D/dalvikvm( 988): GC freed 2 objects / 56 bytes in 215ms
D/ViewFlipper( 1074): updateRunning() mVisible=false, mStarted=true, mUserPresent=false, mRunning=false
I/HPAndroidHAL(10158): Version 2988. Build Time: Oct 26 2009:11:21:55.
D/CameraHal(10158): 19 default parameters
D/CameraHal(10158): Immediate Zoom/1:0. Current zoom level/1:0
D/CameraHal(10158): CameraHal constructor exited ok
D/CameraService(10158): Client::Client X (pid 10400)
D/CameraService(10158): CameraService::connect X
D/CameraService(10158): takePicture (pid 10400)
I/DEBUG (10159): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG (10159): Build fingerprint: 'verizon/voles/sholes/sholes:2.1-update1/ESE81/29593:user/release-keys'
I/DEBUG (10159): pid: 10158, tid: 10158 >>> /system/bin/mediaserver <<<
I/DEBUG (10159): signal 11 (SIGSEGV), fault addr 00000008
I/DEBUG (10159): r0 00000000 r1 00000000 r2 a969030c r3 a9d1bfe0
I/DEBUG (10159): r4 00045eb0 r5 0000eb10 r6 000153a0 r7 a9c89fd2
I/DEBUG (10159): r8 00000000 r9 00000000 10 00000000 fp 00000000
I/DEBUG (10159): ip a969085c sp bec4fba0 lr a9689c65 pc a9d1bfde cpsr 60000030
I/DEBUG (10159): #00 pc 0001bfde /system/lib/libutils.so
I/DEBUG (10159): #01 pc 00009c62 /system/lib/libcamera.so
I/DEBUG (10159): #02 pc 00007b0c /system/lib/libcameraservice.so
I/DEBUG (10159): #03 pc 00021f98 /system/lib/libui.so
I/DEBUG (10159): #04 pc 00015514 /system/lib/libbinder.so
I/DEBUG (10159): #05 pc 00018dd8 /system/lib/libbinder.so
I/DEBUG (10159): #06 pc 00018fa6 /system/lib/libbinder.so
I/DEBUG (10159): #07 pc 000087d2 /system/bin/mediaserver
I/DEBUG (10159): #08 pc 0000c228 /system/lib/libc.so
I/DEBUG (10159):
I/DEBUG (10159): code around pc:
I/DEBUG (10159): a9d1bfcc bd1061e3 f7f3b510 bd10e97e 4d17b570
I/DEBUG (10159): a9d1bfdc 6886a300 460418ed fff4f7ff d10a4286
I/DEBUG (10159): a9d1bfec 46234913 20054a13 f06f1869 18aa040a
I/DEBUG (10159):
I/DEBUG (10159): code around lr:
I/DEBUG (10159): a9689c54 e0240412 0204f8d0 050cf104 edf0f7fd
I/DEBUG (10159): a9689c64 f7fd4628 f8d4ecf2 b1533204 f852681a
I/DEBUG (10159): a9689c74 18581c0c 7101f504 ed82f7fd f8c42000
I/DEBUG (10159):
I/DEBUG (10159): stack:
I/DEBUG (10159): bec4fb60 4000902c /dev/binder
I/DEBUG (10159): bec4fb64 a9d19675 /system/lib/libutils.so
I/DEBUG (10159): bec4fb68 00002bb4
I/DEBUG (10159): bec4fb6c a9d1b26f /system/lib/libutils.so
I/DEBUG (10159): bec4fb70 bec4fbbc [stack]
I/DEBUG (10159): bec4fb74 00095080 [heap]
I/DEBUG (10159): bec4fb78 a9c8c028 /system/lib/libcameraservice.so
I/DEBUG (10159): bec4fb7c a9c8c028 /system/lib/libcameraservice.so
I/DEBUG (10159): bec4fb80 00015390 [heap]
I/DEBUG (10159): bec4fb84 a9c89fd2 /system/lib/libcameraservice.so
I/DEBUG (10159): bec4fb88 00045ebc [heap]
I/DEBUG (10159): bec4fb8c afe0f110 /system/lib/libc.so
I/DEBUG (10159): bec4fb90 00000000
I/DEBUG (10159): bec4fb94 afe0f028 /system/lib/libc.so
I/DEBUG (10159): bec4fb98 df002777
I/DEBUG (10159): bec4fb9c e3a070ad
I/DEBUG (10159): #00 bec4fba0 00045eb0 [heap]
I/DEBUG (10159): bec4fba4 00045ebc [heap]
I/DEBUG (10159): bec4fba8 000153a0 [heap]
I/DEBUG (10159): bec4fbac a9689c65 /system/lib/libcamera.so
I/DEBUG (10159): #01 bec4fbb0 a9c8c028 /system/lib/libcameraservice.so
I/DEBUG (10159): bec4fbb4 00015390 [heap]
I/DEBUG (10159): bec4fbb8 000153a0 [heap]
I/DEBUG (10159): bec4fbbc a9c87b0f /system/lib/libcameraservice.so
I/DEBUG (10159): debuggerd committing suicide to free the zombie!
I/DEBUG (10426): debuggerd: Mar 22 2010 17:31:05
W/MediaPlayer( 1021): MediaPlayer server died!
I/ServiceManager( 984): service 'media.audio_flinger' died
I/ServiceManager( 984): service 'media.player' died
I/ServiceManager( 984): service 'media.camera' died
I/ServiceManager( 984): service 'media.audio_policy' died
W/Camera (10400): Camera server died!
W/Camera (10400): ICamera died
E/Camera (10400): Error 100
I/System.out(10400): Camera error, code 100
W/AudioSystem( 1021): AudioFlinger server died!
W/AudioSystem( 1021): AudioPolicyService server died!
I/ (10425): ServiceManager: 0xad08
E/AudioPostProcessor(10425): AudioMgr Error:Failed to open gains file /data/ap_gain.bin
E/AudioPostProcessor(10425): AudioMgr Error:Failed to read gains/coeffs from /data
E/AudioPostProcessor(10425): Audio coeffs init success.
I/CameraService(10425): CameraService started: pid=10425
D/Audio_Unsolicited(10425): in readyToRun
D/Audio_Unsolicited(10425): Create socket successful 10
I/AudioFlinger(10425): AudioFlinger's thread 0x11c30 ready to run
E/AudioService( 1021): Media server died.
E/AudioService( 1021): Media server started.
W/AudioPolicyManager(10425): setPhoneState() setting same state 0
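For context on why takePicture() can bring down the camera service here: the Camera API expects a fully prepared pipeline, meaning the camera is opened, a preview surface is attached, and the preview is started, before a capture is requested. The sketch below shows that conventional setup in Java. It is a hedged illustration of the documented call order, not a verified fix for this particular firmware's segfault; the CameraActivity class name is hypothetical, and the manifest additionally needs the android.permission.CAMERA permission.

    import java.io.IOException;

    import android.app.Activity;
    import android.hardware.Camera;
    import android.os.Bundle;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    public class CameraActivity extends Activity implements SurfaceHolder.Callback {

        private Camera camera;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // The camera needs a live surface to stream preview frames into.
            SurfaceView preview = new SurfaceView(this);
            setContentView(preview);
            preview.getHolder().addCallback(this);
            // Required on pre-3.0 devices such as a 2.1-era Droid.
            preview.getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        }

        public void surfaceCreated(SurfaceHolder holder) {
            camera = Camera.open();
            try {
                camera.setPreviewDisplay(holder); // attach the preview surface
            } catch (IOException e) {
                e.printStackTrace();
                return;
            }
            camera.startPreview();                // preview must run before capture
            camera.takePicture(null, null, new Camera.PictureCallback() {
                public void onPictureTaken(byte[] data, Camera cam) {
                    System.out.println("Ta da.");
                }
            });
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        }

        public void surfaceDestroyed(SurfaceHolder holder) {
            if (camera != null) {
                camera.release(); // hand the hardware back to the system
                camera = null;
            }
        }
    }

Waiting for surfaceCreated() before touching the camera is the key design point: it guarantees the preview surface actually exists when setPreviewDisplay() is called, which a timer-based delay does not.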

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    p { margin-bottom: 0.08in; }h1 { margin-top: 0.33in; margin-bottom: 0in; color: rgb(54, 95, 145); page-break-inside: avoid; }h1.western { font-family: "Cambria",serif; font-size: 14pt; }h1.cjk { font-family: "DejaVu Sans"; font-size: 14pt; }h1.ctl { font-size: 14pt; } Getting Started with Business Transformations A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined: the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1) Fig. 1: Business Process Management Lifecycle Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forethought of both are questions, such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see figure 2) Fig. 2: The Level Concept, Generic Process HierarchySome of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy. The Level Concept in Brief Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications. 
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level), that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization. Key Considerations Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are Three Key Factors to consider when building an Enterprise Process Map: Company Strategic Focus Process Categorization: Customer is Core End-to-end versus Functional Processes Company Strategic Focus As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives, and in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see figure 3). Fig. 3: Company X Enterprise Process Map In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories. 
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

Fig. 4: Company Y Enterprise Process Map

Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle.

Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level assignments of responsibility to organizational units, or the application types that support the process areas. Hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is that the formats and focuses chosen truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives set forth within it.

Process Categorization: Customer is Core

In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how an organization categorizes its processes is typically more complex than simply choosing object shapes and colors.

Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one constant has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer.

Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company.
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See Figure 5)

Fig. 5: Company Z Enterprise Process Map

Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom): Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided it into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

End-to-end versus Functional Processes

Every high and low level of process (function, task, activity, process/work step, whatever an organization calls it) should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package cannot be delivered (the outcome of the process) without first being retrieved, loaded, and the travel destination reached (outcomes of the process steps), some higher level of process, "Play Practical Joke" (e.g., main process or process area), cannot be completed until a package is delivered.

It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks.

Fig. 6: End to End Process Construction

That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, though not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. Thus, an Enterprise Process Map should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost in a mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases, and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them. (See Figures 7 and 8)

Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point

The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes

The second example shows a more complex arrangement and categorization of functional processes (the names of the process areas have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

Conclusion

In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of Enterprise Process Maps is not a prescriptive "how to" guide; a company attempting to create one may not have the practical BPM experience to truly explore its options or the impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be cognizant of the message he is delivering and the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information, please see: Oracle Business Process Management.

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET.  This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:

- Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
- Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
- Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
- Virtual Machines: Start and Stop VMs with suspend billing directly from within Visual Studio
- Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
- Service Bus: New high availability options, Notification Hub support, improved VS tooling
- PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center.  Visual Studio’s Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today’s SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

Visual Studio 2013 Preview Support

Today’s Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the “install the sdk” link from the Windows Azure .NET Developer Center. Then, choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don’t already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

Visual Studio 2013 VM Image Hosted in the Cloud

One of the requests we’ve heard from several customers has been for the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today’s SDK update we’ve added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it.  This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend-billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then by selecting the “Visual Studio Ultimate 2013 Preview” template.

Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

With the Windows Azure SDK 2.1 release you’ll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.
From the top level node you can now manage your Windows Azure credentials, import a subscription file or filter Server Explorer to only show services from particular subscriptions or regions. Note: the Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview.

Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.

Subscription and Region Filtering Support

Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in.  If you have multiple subscriptions but need to focus your attention on just a few subscriptions for some period of time, this is a handy way to hide the services from other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select “Filter Services” from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by. In the below example, I’ve decided to show services from my pay-as-you-go subscription within the East US region. Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately.

With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You’ll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

Other Improvements

We’ve also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer.  Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio (see the short programmatic sketch below, just before the Cloud Services section).

Virtual Machines: Start/Stop within Visual Studio

Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in Dev/Test scenarios where you can start a VM (such as a SQL Server) during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the “Connect using Remote Desktop” context menu command in VS Server Explorer.
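Following up on the storage note above: the same table/blob/queue creation that Server Explorer performs through the IDE can also be done in code with the Windows Azure storage client library. This is an illustrative sketch only, not anything shipped in this SDK; the connection string and resource names are placeholders:

using Microsoft.WindowsAzure.Storage;

class StorageManagementSketch
{
    static void Main()
    {
        // Placeholder connection string; substitute your storage account's real one.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY");

        // Create a table, a queue, and a blob container if they don't already exist.
        account.CreateCloudTableClient().GetTableReference("orders").CreateIfNotExists();
        account.CreateCloudQueueClient().GetQueueReference("jobs").CreateIfNotExists();
        account.CreateCloudBlobClient().GetContainerReference("images").CreateIfNotExists();
    }
}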
Cloud Services: Emulator Express with Run as Normal User Support

You can now launch Visual Studio and run your cloud services locally as a Normal User (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release.  Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode – one instance per role – and it doesn’t require administrative permissions and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles.

To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab. Check the setting for IIS (Internet Information Services) and make sure that the option is set to IIS Express, not the full version of IIS; Emulator Express is not compatible with full IIS. Then, on the Web tab, choose the option for Emulator Express.

Service Bus: Notification Hubs

With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here.

Service Bus: Paired Namespaces

One of the new features included with today’s Windows Azure SDK 2.1 release is support for Service Bus “Paired Namespaces”.  Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enable you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover directs messages into the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability.

Service Bus: Tooling Improvements

In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio’s Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven “message pump” programming model based on the QueueClient.OnMessage method.
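To illustrate the message pump model, here is a minimal sketch (not the template's exact code; the connection string and queue name are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class MessagePumpSketch
{
    static void Main()
    {
        // Placeholder Service Bus connection string.
        var connectionString = "Endpoint=sb://YOUR_NAMESPACE.servicebus.windows.net/;...";
        var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        // OnMessage starts the pump; the callback fires once per received message.
        client.OnMessage(message =>
        {
            Console.WriteLine("Received: {0}", message.GetBody<string>());
        });

        Console.ReadLine(); // keep the process alive while the pump runs
        client.Close();
    }
}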
PowerShell: Tons of New Automation Commands

Since my last blog post on the previous Windows Azure SDK 2.0 release, we’ve updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We’ve added new cmdlets in the following areas: China instance and Windows Azure Pack support, environment configuration, VMs, Cloud Services, Web Sites, Storage, SQL Azure, and Service Bus.

China Instance and Windows Azure Pack

We now support the following cmdlets for the China instance and Windows Azure Pack, respectively:
- China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network
- Windows Azure Pack: Web Sites, Service Bus
We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

Virtual Machines: Stop/Start Virtual Machines

Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.

Virtual Machines: VM endpoint ACLs

We’ve added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint: New-AzureAclConfig, Get-AzureAclConfig, Set-AzureAclConfig, Remove-AzureAclConfig, Add-AzureEndpoint -ACL, Set-AzureEndpoint -ACL. Other improvements for Virtual Machine management include:
- Added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management
- Added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return
- Added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints

Cloud Services: Remote Desktop and Diagnostics

Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We’ve introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell.
- Windows Azure Cloud Services Remote Desktop extension: New-AzureServiceRemoteDesktopExtensionConfig, Get-AzureServiceRemoteDesktopExtension, Set-AzureServiceRemoteDesktopExtension, Remove-AzureServiceRemoteDesktopExtension
- Windows Azure Cloud Services Diagnostics extension: New-AzureServiceDiagnosticsExtensionConfig, Get-AzureServiceDiagnosticsExtension, Set-AzureServiceDiagnosticsExtension, Remove-AzureServiceDiagnosticsExtension

Web Sites: Diagnostics

With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to stream the logs of your Web Sites. Recently, we’ve also added cmdlets to configure Web Site application diagnostics, writing either to the file system or to a Windows Azure Storage Table: Enable-AzureWebsiteApplicationDiagnostic and Disable-AzureWebsiteApplicationDiagnostic.

SQL Database

Previously, you had to know the SQL Database server admin username and password if you wanted to manage databases on that SQL Database server. Recently, we’ve made the experience much easier by not requiring the admin credential when the database server is in your subscription. You can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets.
Get-AzureSqlDatabase, New-AzureSqlDatabase, Remove-AzureSqlDatabase, Set-AzureSqlDatabase. We’ve also added a -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we’ve also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details: Get-AzureSqlDatabaseServerQuota, Get-AzureSqlDatabaseServiceObjective, Set-AzureSqlDatabase -ServiceObjective.

Storage and Service Bus

Other new cmdlets include:
- Storage: CRUD cmdlets for Azure Tables and Queues
- Service Bus: cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Wednesday, December 01, 2010

    CodePlex Daily Summary for Wednesday, December 01, 2010

Popular Releases

UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Here is a release that many have been eagerly awaiting: - The CEX341 version lets you play games that require firmware 3.50 (some simply do not work). - For now, CEX341 is only available for PS3 consoles on firmware 3.41!!! - The PL3 KAKAROTO version integrates his latest modifications and now includes firmware 3.30!!! Conclusion: - UltimateJB CEX341 => spoofs firmware 3.41 as 3.50 (makes it easier to use certain games with openManage...

Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.2 Beta2: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ContentRoot property) - Added properties AccessKey, AccessKeyModifier, AccessKeyElemen...

EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...

AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...

Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...

Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...

Minecraft GPS: Minecraft GPS 1.1.1: New Features: Compass! New style.
Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes: Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-Vista kernel machines.

DotSpatial: DotSpatial 11-28-2010: This release introduces some exciting improvements. Support for big raster, both in display and changing the scheme. Faster raster scheme creation for all rasters. Caching of the "sample" values so once obtained the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...

SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is based on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code includes a LiveChat demo.

MDownloader: MDownloader-0.15.25.7002: Fixed updater Fixed FileServe Fixed LetItBit

Cropper: 1.9.4: Mostly fixes for issues with a few feature requests. Fixed Issues 2730 & 3638 & 14467 11044 11447 11448 11449 14665 Implemented Features 6123 11581

PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.

PDF Rider: PDF Rider 0.5: This release does not add any new feature for pdf manipulation, but enables automatic update checking, so it is recommended to install it in order to stay updated with next releases. Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: Choose one of the following methods: 1. Download and run the "pdfRider0...

BCLExtensions: BCL Extensions v1.0: The files associated with v1.0 of the BCL Extensions library.

XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of the popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.

Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.

Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010

Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...

Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.

Sexy Select: sexy select v0.4: Changes in v0.4: Added method: elements. This returns all the option elements that are currently added to the select list. Added method: selectOption.
This method accepts two values, the element to be modified and the selected state (true/false).

New Projects

Abstract SQL: ADO.NET Sql classes wrapper; provides a clean fluent interface library that allows you to write very concise code and avoid the repetitiveness of ADO.NET. It can be used in all types of applications, even supports CLR stored procedures. It is written in C# 2.0.

AI: The Artificial Intelligence program built on F#.

Another .NET wrapper for the MailChimp API: A .NET wrapper for the MailChimp API 1.3 written in F# by DK.

App-V Tool Suite: A collection of tools for Microsoft Application Virtualization (App-V). These tools were developed at Sinclair Community College in the process of setting up and then supporting its App-V implementation.

b2b: Project-01t

Bham.CmuCam: A library and GUI front-end for the CMUcam series of cameras for use by .NET-based applications; the GUI can also run standalone.

BizTalk Deployment Tool: Yet another BizTalk deployment tool, to make BizTalk deployment that needs to support orchestration versioning and multiple environments easier. Features include but are not limited to: GAC Verification, Receive Locations and Send Ports management, Orchestration States management

Certificate Request (PKCS#10) Generator: A .NET application that can create PKCS#10 Certificate Requests, either by generating a new key or reusing a preexisting one. Minimum requirement: Windows Vista and above. .NET 2.0.

Cloud Billing: Cloud billing proposal.

Currency Converter: Coding4Fun Windows Phone 7 Currency Converter

DiffLib: A diff implementation class library for .NET 3.5 written in C#.

expression Blend 4.0 Comment Uncomment Xaml Code Extension: BlendShortCuts Extension makes it easier for Blend users to comment and uncomment xaml code. You'll no longer have to insert the tag <!-- -->; just press Alt+C and Alt+U. It's developed in C# (.Net framework 4.0, Microsoft Expression library)

Game Studio: Game Studio is an Integrated Development Environment "IDE" that helps game developers in designing their games. The software generates code for the mesh, models, pictures and sounds. It has a form designer, code editor and a special framework for the "Game Studio".

Gridify for ASP.NET MVC: Easy solution for grids on top of ASP.NET MVC. Make grids from your data tables in a really lightweight manner! How lightweight? Well, exactly TWO line changes. You don't have to add new action parameters or anything. Really simple!

In-House Inventory System: A normal inventory management system, normal use case and normal function.

In-House Money Saver: Just for a school project; it may not be useful. However, it is my first project using Microsoft technology.

ITICup2009: Programs used in ITICup 2009.

libobs++: Implementation of a signal/slot system, created exclusively for developers that use Visual Studio's 2010 IDE/Compiler. The classes are templated, and really easy to learn. The callbacks are fast, and type-safe.

Longan ERP: Open Source Business Solutions.

Lucienne - WebScripting Assignment: Lucienne - WebScripting Assignment

Mercurial.Net: .NET wrapper class library for the Mercurial Distributed Version Control System (DVCS) - (http://mercurial.selenic.com/), written in C# 3.0 for the .NET 3.5 Client Profile runtime.

Mini C++ UI Framework: Mini C++ UI Framework for my work.

MinuRaamu: MeieRaamu

Mobile Device Browser File: The Mobile Browser Definition File contains definitions for individual mobile devices and browsers.
At run time, ASP.NET uses the information in the request header to determine what type of device/browser has made the request.

NKinect: .NET 4.0 (C++/CLI) based open source implementation of Microsoft Kinect. Currently supports CodeLaboratories NUI SDK, but will be brought to OpenKinect/libfreenect when a Windows version is stable.

Oblivion Cell - Oblivion mmo project: We are working on creating an MMORPG mod/addon for Oblivion using C# and hooking into the actual game with OBSE and a few other mods. We also use Cell Framework for our base server system.

Optional: Optional is a library to create options and commands from command-line arguments. It uses Convention over Configuration to get out of your way. Attributes can be used to set properties which differ from the convention.

Paypal adaptive payments using .NET (C#): This is a C# project to help you interface with the PayPal adaptive payments API. https://www.x.com/community/ppx/adaptive_payments.

POS bd: Dynamic POS. This project is being developed with a focus on the local market only. The initial project is targeted at a single company whose main business is selling lighting-bulb instruments.

PowerEvents for Windows PowerShell: A Microsoft Windows PowerShell module to assist with managing permanent WMI event consumer registrations. You can use this module to register for, and respond to, system-level events available to WMI.

PPL Daily Report Helper: Daily Reporting Helper Tool for Phoenix Propulsion Labs

Random Passwd Generator: This is a simple program developed in C# that generates random passwords of the specified length with the specified characters to be used. It's in beta version.

SharePoint MUI Manager: The SharePoint MUI Manager allows you to translate user-specified text, such as the Title and Description of the site, through the web interface. There is no need to download, edit and upload a RESX file.

Sqlite Client for Windows Phone: Sqlite client for Windows Phone 7. Supports transactions

TouchToolkit: A toolkit to simplify multi-touch application development and testing complexities. It currently supports WPF and Silverlight.

TSI4: Project for the engineering faculty.

VS2010 Debugger Visualizers Contrib: This project is for hosting user-contributed debugger visualizers for Microsoft Visual Studio 2010.

Windows Shell Framework: Windows Shell Framework is a managed wrapper for a subset of the Windows shell. This project is a fork of the .NET Shell Namespace Extension Framework.

Work in Progress: Work in progress

WPFtest: A simple test project for experimenting with WPF.

YingYangXonix: YingYangXonix

ZeroUnit.net: The zero dependency, zero friction, sugar free Unit Testing framework for .Net.

ZXing barcode for Windows Phone: Barcode support for Windows Phone 7 using ZXing

    Read the article

  • Introducing Data Annotations Extensions

    - by srkirkland
    Validation of user input is integral to building a modern web application, and ASP.NET MVC offers us a way to enforce business rules on both the client and server using Model Validation.  The recent release of ASP.NET MVC 3 has improved these offerings on the client side by introducing an unobtrusive validation library built on top of jquery.validation.  Out of the box, MVC comes with support for Data Annotations (that is, System.ComponentModel.DataAnnotations) and can be extended to support other frameworks.  Data Annotations Validation is becoming more popular and is being baked in to many other Microsoft offerings, including Entity Framework, though with MVC it only contains four validators: Range, Required, StringLength and Regular Expression.  The Data Annotations Extensions project attempts to augment these validators with additional attributes while maintaining the clean integration Data Annotations provides.

A Quick Word About Data Annotations Extensions

The Data Annotations Extensions project can be found at http://dataannotationsextensions.org/, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations’ original 4.  You can find a current list of the validation attributes on the aforementioned website. The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency). There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only MVC3-included javascript files are required).

On to the Preview

Let’s say you had the following “Customer” domain model (or view model, depending on your project structure) in an MVC 3 project:

public class Customer
{
    public string Email { get; set; }
    public int Age { get; set; }
    public string ProfilePictureLocation { get; set; }
}

When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself). With no validation, the customer can enter nonsense for an email address, and then can even report their age as a negative number! With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes.  However, I’d still be able to report my age as 10.75 years old, and my profile picture could still be any string.
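For reference, a rough sketch of that built-in-attributes-only approach might look like the following (the regular expression is a deliberately crude, illustrative pattern, not a recommendation):

using System.ComponentModel.DataAnnotations;

public class Customer
{
    [Required]
    // A naive email pattern; exactly the "yuck" referred to above.
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$")]
    public string Email { get; set; }

    [Required]
    [Range(1, 110)] // still allows 10.75 when bound from a decimal input
    public int Age { get; set; }

    // No built-in validator expresses "must be an image file path."
    public string ProfilePictureLocation { get; set; }
}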
Let’s use Data Annotations along with this project, Data Annotations Extensions, and see what we can get:

public class Customer
{
    [Email]
    [Required]
    public string Email { get; set; }

    [Integer]
    [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")]
    [Required]
    public int Age { get; set; }

    [FileExtensions("png|jpg|jpeg|gif")]
    public string ProfilePictureLocation { get; set; }
}

Now let’s try to put in some invalid values and see what happens: we get very nice validation, all done on the client side (it will also be validated on the server).  Also, the Customer class validation attributes are very easy to read and understand. Another bonus: since Data Annotations Extensions can integrate with MVC 3’s unobtrusive validation, no additional scripts are required! Now that we’ve seen our target, let’s take a look at how to get there within a new MVC 3 project.

Adding Data Annotations Extensions To Your Project

First we will File->New Project and create an ASP.NET MVC 3 project.  I am going to use Razor for these examples, but any view engine can be used in practice.  Now go into the NuGet Extension Manager (right click on references and select Add Library Package Reference) and search for “DataAnnotationsExtensions.”  You should see two packages. The first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive server and client validation support, click on the DataAnnotationsExtensions.MVC3 project and then click Install.  This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo’s web activator (which enables the validation attributes to be registered with MVC 3).

Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation.  If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically.  However, assuming you are starting with a blank project I’ll walk you through setting up a controller and model to test with.

Creating Your Model

In the Models folder, create a new User.cs file with a User class that you can use as a model.  To start with, I’ll use the following class:

public class User
{
    public string Email { get; set; }
    public string Password { get; set; }
    public string PasswordConfirm { get; set; }
    public string HomePage { get; set; }
    public int Age { get; set; }
}

Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling).
Your files will look something like this:

UserController.cs:

public class UserController : Controller
{
    public ActionResult Create()
    {
        return View(new User());
    }

    [HttpPost]
    public ActionResult Create(User user)
    {
        if (!ModelState.IsValid)
        {
            return View(user);
        }

        return Content("User valid!");
    }
}

Create.cshtml:

@model NuGetValidationTester.Models.User

@{
    ViewBag.Title = "Create";
}

<h2>Create</h2>

<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

@using (Html.BeginForm())
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>User</legend>
        @Html.EditorForModel()
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery itself is referenced in the layout page).  These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions.  These references are added by default when you use the MVC 3 “Add View” dialog on a modification template type.

Now when we go to /User/Create we should see a form for editing a User. Since we haven’t yet added any validation attributes, this form is valid as shown (including no password, email and an age of 0).  With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don’t want to leave out supercentenarians), but let’s go further and validate our input comprehensively using Data Annotations Extensions.  The new and improved User.cs model class:
public class User
{
    [Required]
    [Email]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    [Required]
    [EqualTo("Password")]
    public string PasswordConfirm { get; set; }

    [Url]
    public string HomePage { get; set; }

    [Integer]
    [Min(1)]
    public int Age { get; set; }
}

Now let’s re-run our form and try to put in some invalid values. All of the validation errors occur on the client, without ever even hitting submit.  The validation is also checked on the server, which is a good practice since client validation is easily bypassed. That’s all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily.

Nitpickers Corner

ASP.NET MVC 3 futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo.  Unfortunately referencing MVC 3 futures necessitates taking a dependency on MVC 3 in your model layer, which may be unadvisable in a multi-tiered project.  Data Annotations Extensions keeps the server and client side libraries separate, so using the project’s validation attributes doesn’t require you to take any additional dependencies in your model layer while still allowing for the rich client validation experience if you are using MVC 3.

Custom Error Message and Globalization: since the Data Annotations Extensions are built on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages.

Available Validators: please see the project site at http://dataannotationsextensions.org/ for an up-to-date list of the new validators included in this project.  As of this post, the following validators are available: CreditCard, Date, Digits, Email, EqualTo, FileExtensions, Integer, Max, Min, Numeric, Url.

Conclusion

Hopefully I’ve illustrated how easy it is to add server and client validation to your MVC 3 projects, and how easily you can extend the available validation options to meet real world needs. The Data Annotations Extensions project is fully open source under the BSD license.  Any feedback would be greatly appreciated.  More information than you require, along with links to the source code, is available at http://dataannotationsextensions.org/. Enjoy!

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
    There are many examples of the Silverlight Tree View that you will find on the web; however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how Hierarchical Data Templates work is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let's look at the typical two-level Silverlight Tree View that has been hard coded with the values shown below:

<sdk:TreeView>
  <sdk:TreeViewItem Header="Managers">
    <TextBlock Text="Michael" />
    <TextBlock Text="Paul" />
  </sdk:TreeViewItem>
  <sdk:TreeViewItem Header="Supervisors">
    <TextBlock Text="John" />
    <TextBlock Text="Tim" />
    <TextBlock Text="David" />
  </sdk:TreeViewItem>
</sdk:TreeView>

Figure 1 shows you how this tree view looks when you run the Silverlight application.

Figure 1: A hard-coded, two level Tree View.

Next, let's create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class. The Employee class simply has one property called Name. The constructor is created to accept a "name" argument that you can use to set the Name property when you create an Employee object.

public class Employee
{
  public Employee(string name)
  {
    Name = name;
  }

  public string Name { get; set; }
}

Next, you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees.

public class EmployeeType
{
  public EmployeeType(string empType)
  {
    EmpType = empType;
    Employees = new List<Employee>();
  }

  public string EmpType { get; set; }
  public List<Employee> Employees { get; set; }
}

Finally we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you will build the collection of EmployeeTypes and fill it with Employee objects:

public class EmployeeTypes : List<EmployeeType>
{
  public EmployeeTypes()
  {
    EmployeeType type;

    type = new EmployeeType("Manager");
    type.Employees.Add(new Employee("Michael"));
    type.Employees.Add(new Employee("Paul"));
    this.Add(type);

    type = new EmployeeType("Project Managers");
    type.Employees.Add(new Employee("Tim"));
    type.Employees.Add(new Employee("John"));
    type.Employees.Add(new Employee("David"));
    this.Add(type);
  }
}

You now have a data hierarchy in memory (Figure 2), which is what the Tree View control expects to receive as its data source.

Figure 2: A hierarchical data structure of Employee Types containing a collection of Employee objects.

To connect this hierarchy of data to your Tree View, you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is "empTypes". This key is used as the source of data for the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1.

Figure 3: You need to start from the bottom up when laying out your templates for a Tree View.

The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class.
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (line 19 connects to line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeTypes collection used in the Hierarchical Data Template. In this case that is the Employees property. The Employee class has a "Name" property that is used to display the employee name in the second level of the Tree View (line 15).

What is important here is that your lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure.

Three Levels in a Tree View

Now let's expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects, and each Employee class has a collection of Employee objects as well.

Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template.

The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property as shown below:

public class Employee
{
  public Employee(string name)
  {
    Name = name;
    ManagedEmployees = new List<Employee>();
  }

  public string Name { get; set; }
  public List<Employee> ManagedEmployees { get; set; }
}

The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code.

public class EmployeeTypes : List<EmployeeType>
{
  public EmployeeTypes()
  {
    EmployeeType type;
    Employee emp;
    Employee managed;

    type = new EmployeeType("Manager");
    emp = new Employee("Michael");
    managed = new Employee("John");
    emp.ManagedEmployees.Add(managed);
    managed = new Employee("Tim");
    emp.ManagedEmployees.Add(managed);
    type.Employees.Add(emp);

    emp = new Employee("Paul");
    managed = new Employee("Michael");
    emp.ManagedEmployees.Add(managed);
    managed = new Employee("Sara");
    emp.ManagedEmployees.Add(managed);
    type.Employees.Add(emp);
    this.Add(type);

    type = new EmployeeType("Project Managers");
    type.Employees.Add(new Employee("Tim"));
    type.Employees.Add(new Employee("John"));
    type.Employees.Add(new Employee("David"));
    this.Add(type);
  }
}

Now that you have all of the data built in your classes, you are ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View.
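Since Figure 5 itself is an image, here is a minimal sketch of the XAML it describes, reconstructed from the surrounding prose. The local xmlns prefix and the key name managerTemplate for the middle template are my own naming; the article does not show them:

<UserControl.Resources>
  <local:EmployeeTypes x:Key="empTypes" />

  <!-- Lowest level: a plain DataTemplate, listed first -->
  <DataTemplate x:Key="employeeTemplate">
    <TextBlock Text="{Binding Name}" />
  </DataTemplate>

  <!-- Middle level: employees who manage other employees -->
  <sdk:HierarchicalDataTemplate x:Key="managerTemplate"
                                ItemsSource="{Binding ManagedEmployees}"
                                ItemTemplate="{StaticResource employeeTemplate}">
    <TextBlock Text="{Binding Name}" />
  </sdk:HierarchicalDataTemplate>

  <!-- Top level: employee types -->
  <sdk:HierarchicalDataTemplate x:Key="employeeTypeTemplate"
                                ItemsSource="{Binding Employees}"
                                ItemTemplate="{StaticResource managerTemplate}">
    <TextBlock Text="{Binding EmpType}" />
  </sdk:HierarchicalDataTemplate>
</UserControl.Resources>

<sdk:TreeView ItemsSource="{Binding Source={StaticResource empTypes}}"
              ItemTemplate="{StaticResource employeeTypeTemplate}" />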
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again, you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is defined below the point where you reference it, you will get a runtime error.

Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template.

Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control: you expect the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template, but you need to look at the template lower down in the XAML to see the source of the data, as shown in Figure 6.

Figure 6: The properties you use within the content of a template come from the ItemsSource of the next template in the resources section.

Summary

Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what it will look like and where its data will come from. You then build the next Hierarchical Data Template up to feed the data to the template you just created. You keep doing this for each level in your Tree View until you get to the top level. The data for that last Hierarchical Data Template comes from the ItemsSource on the Tree View itself.

NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "Silverlight TreeView with Multiple Levels" from the drop-down list.


  • lxc containers hang after upgrade to 13.10

    - by doug123
I have three LXC containers. They were all working fine on 12.10, and I upgraded the containers with do-release-upgrade to 13.04 and then 13.10; that worked great. Then I upgraded the host to 13.04 and then 13.10, and now the three containers hang with this:

>lxc-start -n as1 -l DEBUG -o $(tty)
lxc-start 1383145786.513 INFO lxc_start_ui - using rcfile /var/lib/lxc/as1/config
lxc-start 1383145786.513 WARN lxc_log - lxc_log_init called with log already initialized
lxc-start 1383145786.513 INFO lxc_apparmor - aa_enabled set to 1
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/13' (7/8)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/14' (9/10)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/15' (11/12)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/17' (13/14)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/18' (15/16)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/19' (17/18)
lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/20' (19/20)
lxc-start 1383145786.514 INFO lxc_conf - tty's configured
lxc-start 1383145786.514 DEBUG lxc_start - sigchild handler set
lxc-start 1383145786.514 DEBUG lxc_console - opening /dev/tty for console peer
lxc-start 1383145786.514 DEBUG lxc_console - using '/dev/tty' as console
lxc-start 1383145786.514 DEBUG lxc_console - 6242 got SIGWINCH fd 25
lxc-start 1383145786.514 DEBUG lxc_console - set winsz dstfd:22 cols:177 rows:53
lxc-start 1383145786.514 INFO lxc_start - 'as1' is initialized
lxc-start 1383145786.522 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp
lxc-start 1383145786.524 DEBUG lxc_conf - mac address of host interface 'vethB4L35W' changed to private fe:7c:96:a0:ae:29
lxc-start 1383145786.525 DEBUG lxc_conf - instanciated veth 'vethB4L35W/vethVC61K2', index is '26'
lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G'
lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23'
lxc-start 1383145786.529 INFO lxc_cgroup - cgroup has been setup
lxc-start 1383145786.555 DEBUG lxc_conf - move 'eth0' to '6249'
lxc-start 1383145786.555 INFO lxc_conf - 'as1' hostname has been setup
lxc-start 1383145786.575 DEBUG lxc_conf - 'eth0' has been setup
lxc-start 1383145786.575 INFO lxc_conf - network has been setup
lxc-start 1383145786.575 INFO lxc_conf - looking at .44 42 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /.
lxc-start 1383145786.575 INFO lxc_conf - looking at .52 44 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev.
lxc-start 1383145786.575 INFO lxc_conf - looking at .61 52 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev/pts.
lxc-start 1383145786.575 INFO lxc_conf - looking at .68 44 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /run.
lxc-start 1383145786.575 INFO lxc_conf - looking at .69 68 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/lock.
lxc-start 1383145786.575 INFO lxc_conf - looking at .72 68 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/shm.
lxc-start 1383145786.575 INFO lxc_conf - looking at .73 68 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/user.
lxc-start 1383145786.575 INFO lxc_conf - looking at .76 44 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys.
lxc-start 1383145786.575 INFO lxc_conf - looking at .77 76 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup.
lxc-start 1383145786.575 INFO lxc_conf - looking at .78 77 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset.
lxc-start 1383145786.575 INFO lxc_conf - looking at .79 77 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu.
lxc-start 1383145786.575 INFO lxc_conf - looking at .80 77 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct.
lxc-start 1383145786.575 INFO lxc_conf - looking at .81 77 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/memory.
lxc-start 1383145786.575 INFO lxc_conf - looking at .82 77 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/devices.
lxc-start 1383145786.575 INFO lxc_conf - looking at .83 77 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer.
lxc-start 1383145786.575 INFO lxc_conf - looking at .84 77 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio.
lxc-start 1383145786.575 INFO lxc_conf - looking at .85 77 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event.
lxc-start 1383145786.575 INFO lxc_conf - looking at .94 77 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb.
lxc-start 1383145786.575 INFO lxc_conf - looking at .95 77 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd.
lxc-start 1383145786.575 INFO lxc_conf - looking at .96 76 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/fuse/connections.
lxc-start 1383145786.575 INFO lxc_conf - looking at .98 76 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/debug.
lxc-start 1383145786.575 INFO lxc_conf - looking at .101 76 0:10 / /sys/kernel/security rw,relatime - securityfs none rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/security.
lxc-start 1383145786.575 INFO lxc_conf - looking at .102 76 0:22 / /sys/fs/pstore rw,relatime - pstore none rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/pstore.
lxc-start 1383145786.575 INFO lxc_conf - looking at .103 44 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /proc.
lxc-start 1383145786.575 INFO lxc_conf - looking at .104 44 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /data.
lxc-start 1383145786.575 INFO lxc_conf - looking at .105 44 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue .
lxc-start 1383145786.575 INFO lxc_conf - now p is . /boot.
lxc-start 1383145786.576 DEBUG lxc_conf - mounted '/data/srv/lxc/as1' on '/usr/lib/x86_64-linux-gnu/lxc'
lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//dev/pts', type 'devpts'
lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//proc', type 'proc'
lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//sys', type 'sysfs'
lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//run', type 'tmpfs'
lxc-start 1383145786.576 INFO lxc_conf - mount points have been setup
lxc-start 1383145786.577 INFO lxc_conf - console has been setup
lxc-start 1383145786.577 INFO lxc_conf - 8 tty(s) has been setup
lxc-start 1383145786.577 INFO lxc_conf - rootfs path is ./data/srv/lxc/as1., mount is ./usr/lib/x86_64-linux-gnu/lxc.
lxc-start 1383145786.577 INFO lxc_apparmor - I am 1, /proc/self points to 1
lxc-start 1383145786.577 DEBUG lxc_conf - created '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' directory
lxc-start 1383145786.577 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold'
lxc-start 1383145786.577 DEBUG lxc_conf - pivot_root syscall to '/usr/lib/x86_64-linux-gnu/lxc' successful
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/lock'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/shm'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/user'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuacct'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/hugetlb'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/pstore'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/proc'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/data'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/boot'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys'
lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold'
lxc-start 1383145786.577 INFO lxc_conf - created new pts instance
lxc-start 1383145786.578 DEBUG lxc_conf - drop capability 'sys_boot' (22)
lxc-start 1383145786.578 DEBUG lxc_conf - capabilities have been setup
lxc-start 1383145786.578 NOTICE lxc_conf - 'as1' is setup.
lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G'
lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23'
lxc-start 1383145786.578 INFO lxc_cgroup - cgroup has been setup
lxc-start 1383145786.578 INFO lxc_apparmor - setting up apparmor
lxc-start 1383145786.578 INFO lxc_apparmor - changed apparmor profile to lxc-container-default
lxc-start 1383145786.578 NOTICE lxc_start - exec'ing '/sbin/init'
lxc-start 1383145786.578 INFO lxc_conf - looking at .15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys.
lxc-start 1383145786.578 INFO lxc_conf - looking at .16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /proc.
lxc-start 1383145786.578 INFO lxc_conf - looking at .17 20 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev.
lxc-start 1383145786.578 INFO lxc_conf - looking at .18 17 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev/pts.
lxc-start 1383145786.578 INFO lxc_conf - looking at .19 20 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /run.
lxc-start 1383145786.578 INFO lxc_conf - looking at .20 1 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /.
lxc-start 1383145786.578 INFO lxc_conf - looking at .22 15 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/cgroup.
lxc-start 1383145786.578 INFO lxc_conf - looking at .23 15 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw .
lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/fuse/connections.
lxc-start 1383145786.578 INFO lxc_conf - looking at .24 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/debug.
lxc-start 1383145786.579 INFO lxc_conf - looking at .25 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/security.
lxc-start 1383145786.579 INFO lxc_conf - looking at .26 19 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/lock.
lxc-start 1383145786.579 INFO lxc_conf - looking at .27 19 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/shm.
lxc-start 1383145786.579 INFO lxc_conf - looking at .28 22 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset.
lxc-start 1383145786.579 INFO lxc_conf - looking at .29 19 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/user.
lxc-start 1383145786.579 INFO lxc_conf - looking at .30 15 0:22 / /sys/fs/pstore rw,relatime - pstore none rw .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/pstore.
lxc-start 1383145786.579 INFO lxc_conf - looking at .31 22 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu.
lxc-start 1383145786.579 INFO lxc_conf - looking at .32 22 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct.
lxc-start 1383145786.579 INFO lxc_conf - looking at .33 22 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/memory.
lxc-start 1383145786.579 INFO lxc_conf - looking at .34 22 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/devices.
lxc-start 1383145786.579 INFO lxc_conf - looking at .35 22 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer.
lxc-start 1383145786.579 INFO lxc_conf - looking at .36 22 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio.
lxc-start 1383145786.579 INFO lxc_conf - looking at .37 22 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event.
lxc-start 1383145786.579 INFO lxc_conf - looking at .38 22 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb.
lxc-start 1383145786.579 INFO lxc_conf - looking at .39 20 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /data.
lxc-start 1383145786.579 INFO lxc_conf - looking at .40 20 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /boot.
lxc-start 1383145786.579 INFO lxc_conf - looking at .41 22 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd .
lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd.
lxc-start 1383145786.579 NOTICE lxc_start - '/sbin/init' started with pid '6249'
lxc-start 1383145786.579 WARN lxc_start - invalid pid for SIGCHLD
<4>init: ureadahead main process (7) terminated with status 5
<4>init: console-font main process (94) terminated with status 1

It will just sit there like that for hours. The container becomes pingable, but I can't ssh in, and if I try lxc-console -n as1 I get a blank screen. If I do lxc-stop -n as1 or press ^C in the window where it has hung, I get:

^CTERM environment variable not set.
<4>init: plymouth-upstart-bridge main process (192) terminated with status 1
<4>init: hwclock-save main process (187) terminated with status 70
 * Asking all remaining processes to terminate... ...done.
 * All processes ended within 1 seconds... ...done.
 * Deactivating swap... ...fail!
mount: cannot mount block device /dev/md2 read-only
 * Will now restart

But after 20 minutes it hasn't restarted. Any ideas why these containers are hanging?

