Search Results

Search found 696 results on 28 pages for 'eddie marks'.


  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background Information

    We're looking to hire a seasoned C++ architect (10+ years of development, of which at least 6 years must be C++) for a high-frequency trading platform. The job advert says proficiency with the STL and Boost is a must, with a preference for modern uses of C++. The company I work for is a Fortune 500 IB (aka the finance industry), and it requires passes in all the standard SHL tests (numeric, vocabulary, spatial, etc.) before interviews can commence.

    Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test. Note that this is the second test given to candidates; the first is the Advanced IKM C++ test, done in our offices, supervised and without internet access. Those who pass it sit the second test, which is not timed: candidates can literally take as long as they like (one person took roughly 10.5 hours).

    After roughly 70 candidates, my question has turned out to be statistically the worst performing: the fewest people attempted it, and even fewer were able to give meaningful answers. My question to SO is this: after the SHL and IKM advanced C++ tests, backed up by at least 6 years of C++ development experience, is it still OK to be unable even to comment on, let alone come up with some loose strategy for, the following question?

    The Question

    There is a class C with methods foo, boo, boo_and_foo and foo_and_boo. The methods take i, j, k and l clock cycles respectively, where i < j, k < i+j and l < i+j.

        class C
        {
        public:
            int foo() {...}
            int boo() {...}
            int boo_and_foo() {...}
            int foo_and_boo() {...}
        };

    In code one might write:

        C c;
        ...
        int i = c.foo() + c.boo();

    But it would be better to have:

        int i = c.foo_and_boo();

    What changes or techniques could one make to the definition of C that would allow the syntax of the original usage, but have the compiler generate the latter? Note that foo and boo are not commutative.

    Possible Solution

    We were basically looking for an expression-templates-based approach, and were willing to give marks to anyone who even hinted at or used the phrase or related terminology. Only two people used the wording, and neither was able to properly describe how to accomplish the task in detail.

    We use such techniques all over the place, due to the use of various mathematical operators for matrix- and vector-based calculations, for example to decide at compile time whether to use IPP or hand-woven implementations for a particular architecture, among many other things. This particular area of software development requires microsecond response times. I believe I could/should be able to teach a junior such techniques, but given the assumed caliber of the candidates I expected a little more.

    Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?
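
    For illustration, here is a minimal sketch of the expression-templates idea the question is fishing for (my reconstruction, not an official model answer; the proxy types and stand-in method bodies are invented): foo() and boo() return tiny proxy objects that remember the receiver, and operator+ over the two proxy types is overloaded so the compiler statically rewrites c.foo() + c.boo() into the fused c.foo_and_boo() call.

        #include <iostream>

        class C;

        // Proxies returned by foo()/boo(); conversion to int does the real work
        // when no fusion opportunity exists (e.g. int x = c.foo();).
        struct FooExpr { C* c; operator int() const; };
        struct BooExpr { C* c; operator int() const; };

        class C {
        public:
            FooExpr foo() { return FooExpr{this}; }
            BooExpr boo() { return BooExpr{this}; }
            int foo_and_boo() { return do_foo() + do_boo(); }  // pretend: l < i+j cycles
            int boo_and_foo() { return do_boo() + do_foo(); }  // pretend: k < i+j cycles
            int do_foo() { return 1; }  // stand-ins for the expensive bodies
            int do_boo() { return 2; }
        };

        FooExpr::operator int() const { return c->do_foo(); }
        BooExpr::operator int() const { return c->do_boo(); }

        // Fused overloads, selected at compile time (an exact match beats the
        // built-in int+int via two conversions). foo and boo are not
        // commutative, hence two distinct overloads.
        int operator+(const FooExpr& f, const BooExpr&) { return f.c->foo_and_boo(); }
        int operator+(const BooExpr& b, const FooExpr&) { return b.c->boo_and_foo(); }

        int main() {
            C c;
            int i = c.foo() + c.boo();  // resolves to the fused foo_and_boo() path
            std::cout << i << "\n";     // prints 3
        }

    A real implementation would generalize the proxies over arbitrary method chains, but the overload-selection trick above is the heart of the technique.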


  • Adding UL to jQuery UI Tabs

    - by Dave Kiss
    Whenever I try to add a UL inside the content containers when using jQuery UI Tabs, it breaks the JavaScript. Is there a way to use a UL inside these containers that I am missing? Thanks

        <div id="tabs">
          <div id="fragment-1">
            <h4>Pre-Press Requirements</h4>
            - SAMPLE of final artwork with noted sizes.
            - NATIVE FILES & High Resolution PDF preferred. Files must be created or saved to the listed accepted file formats.
            - FONTS used in files need to be supplied in a separate folder marked "fonts". Please ensure all families (screen & printer) fonts are supplied for the job.
            - IMAGES must be saved as CMYK and no less than 300dpi. NO RGB FILES! We prefer images to be TIF or EPS formats. If you are submitting artwork for spot color printing, vector artwork is preferred. Your images should be supplied in a separate folder marked "links". This will ensure proper reproduction of your artwork.
            - COLORS need to be clearly specified. Pantone (PMS) colors preferred. Please specify if the job is to be printed CMYK, spot color, etc.
            - BLEEDS should be no less than .25"
            - PDF's should be High Resolution. Please include any spot colors or CMYK format for full color printing. NO RGB FILES! All bleeds should be included with trim marks. All fonts must be embedded or outlined. No "layered" PDF files.
          </div>
          <div id="fragment-2">
            <p>Our presses are all capable of sizes up to 11" x 17" using spot color or full color.</p>
          </div>
          <div id="fragment-3">
            <p>USA Quickprint has complete in house bindery to finish each job to meet your needs.</p>
            <ul>
              <li>Folding</li>
              <li>Scoring</li>
              <li>Perforation</li>
              <li>Drilling</li>
              <li>Shrink Wrap</li>
              <li>Trimming</li>
              <li>Collating</li>
              <li>Spiral Binding</li>
              <li>GBC Binding</li>
              <li>Padding</li>
              <li>Stapling</li>
              <li>Numbering</li>
              <li>Lamination</li>
            </ul>
          </div>
        </div>
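
    A likely culprit, for what it's worth: the 1.8-era jQuery UI Tabs widget (if I recall its internals correctly) locates the tab navigation by taking the first <ul>/<ol> it finds inside the container, so if the required navigation list is missing or does not come first, a UL inside a panel can be mistaken for the tab list and the script falls over. A minimal corrected structure (tab labels invented for illustration):

        <div id="tabs">
          <!-- The navigation list must be present and come first... -->
          <ul>
            <li><a href="#fragment-1">Pre-Press</a></li>
            <li><a href="#fragment-2">Presses</a></li>
            <li><a href="#fragment-3">Bindery</a></li>
          </ul>
          <div id="fragment-1">...</div>
          <div id="fragment-2">...</div>
          <div id="fragment-3">
            <!-- ...after which ULs inside panels are harmless. -->
            <ul><li>Folding</li><li>Scoring</li></ul>
          </div>
        </div>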


  • Help with specific Regex: need to match multiple instances of multiple formats in a single string.

    - by KevenK
    I apologize for the terrible title... it can be hard to summarize an entire situation in a single sentence. Let me start by saying that I'm asking because I'm just not a Regex expert. I've used it a bit here and there, but I come up short with the correct way to meet the following requirements. The Regex I'm attempting to write is used in an XML schema for input validation, and elsewhere in JavaScript for the same purpose.

    There are two different formats that are supported: a literal string, which must be surrounded by quotation marks, and a hex-value string, which must be surrounded by braces. Some test cases:

        "this is a literal string"       <-- Valid string, enclosed properly in "s
        "this should " still be correct" <-- Valid string, "s are allowed within (if possible; this requirement could be forgiven if necessary)
        "{00 11 22}"                     <-- Valid string, {}'s allowed in strings. Another one that can be forgiven if necessary
        I am bad output                  <-- Invalid string, no "s
        "Some more problemss"you know    <-- Invalid string, must be fully contained in "s
        {0A 68 4F 89 AC D2}              <-- Valid string, hex characters enclosed in {}s
        {DDFF1234}                       <-- Valid string, spaces are ignored for hex strings
        DEADBEEF                         <-- Invalid string, must be contained in either "s or {}s
        {0A 12 ZZ}                       <-- Invalid string, 'Z' is not a valid hex character

    To satisfy these general requirements, I had come up with the following regex that seems to work well enough. I'm still fairly new to regex, so there could be a huge hole here that I'm missing:

        ".+"|\{([0-9]|[a-f]|[A-F]| )+\}

    If I recall correctly, XML Schema regexes automatically assume beginning and end of line (^ and $ respectively). So, essentially, this regex accepts any string that starts and ends with a ", or starts and ends with {}s and contains only valid hexadecimal characters. This has worked well for me so far, except that I had forgotten about another (less common, and thus forgotten) input option that completely breaks my regex.

    Where I made my mistake: valid input should also allow a user to separate valid strings (of either type, literal/hex) with a comma. This means that a single string should be able to contain more than one of the above valid strings, separated by commas. Luckily, a comma is not a supported character within a literal string (although I see that my existing regex does not care about commas). Example test cases:

        "some string",{0A F1}    <-- Valid
        {1122},{face},"peanut butter"    <-- Valid
        {0D 0A FF FE},"string",{FF FFAC19 85}    <-- Valid (spaces don't matter in hex values)
        "Validation is allowed to break, if a comma is found not separating values",{0d 0a}    <-- Invalid; comma is a delimiter, but "Validation is allowed to break" and "if a comma..." are not marked as separate strings with "s
        hi mom,"hello"    <-- Invalid; String1 was not enclosed properly in "s or {}s

    My thought is that it should be possible to use commas as a delimiter and check each "section" of the string against a regex similar to the original, but I'm just not advanced enough in regex yet to come up with a solution on my own. Any help would be appreciated, but ultimately a final solution with an explanation would be just stellar. Thanks for reading this huge wall of text!
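
    A sketch of one possible answer (my own construction, leaning on the question's rule that commas never occur inside literal strings, which keeps inner quotes legal): define a token as either a quoted comma-free literal or a braced hex string, then anchor one-or-more comma-separated tokens. Checked here in JavaScript, since the asker validates there too; for the XML Schema copy, drop the ^ and $ because they are implicit.

        // Token: "..." (anything but a comma inside) OR { hex digits and spaces }
        const token = '("[^,]*"|\\{[0-9A-Fa-f ]+\\})';
        const full = new RegExp('^' + token + '(,' + token + ')*$');

        const tests = [
            ['"some string",{0A F1}', true],
            ['{1122},{face},"peanut butter"', true],
            ['"this should " still be correct"', true],  // inner quotes survive
            ['hi mom,"hello"', false],                   // first token is not quoted
            ['{0A 12 ZZ}', false],                       // Z is not a hex digit
        ];
        for (const [s, expected] of tests) {
            console.log(full.test(s) === expected ? 'OK      ' : 'MISMATCH', s);
        }

    The trick is simply to repeat the single-token alternation with a leading comma inside a starred group, so each "section" is validated by the same sub-pattern the original regex used.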


  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario:

    User X logs in to the application from location lc1: call it Ulc1. User X (he's been hacked, or some friend of his knows his login credentials, or he just logs in from a different browser on his machine, etc.; you get the point) logs in at the same time from location lc2: call it Ulc2.

    I am using a main servlet which gets a connection from the database pool, sets autocommit to false, and executes a command that goes through the app layers. If all is successful, it sets autocommit back to true in a "finally" statement and closes the connection; if an exception happens, it rolls back.

    In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The column "locked" has the default value "false" and serves as a flag that marks whether a specific row is locked. Each row is specific to a user (as you can see from the username column).

    So back to the scenario:

    - Ulc1 sends the command to update his history in the db for date "D" and topic "T".
    - Ulc2 sends the same command to update the history for the same date "D" and the same topic "T" at the exact same time.

    I want to implement a MySQL/InnoDB locking scheme that will enable whichever thread arrives first to do the following check: is the column "locked" for this row true or not? If true, return a message to the user that he is already updating the same data from another location. If not locked, flag it as locked, update, then reset locked to false once finished.

    Which of these two MySQL locking techniques will actually allow the second arriving thread to read the "updated" value of the locked column and decide what action to take: should I use "FOR UPDATE" or "LOCK IN SHARE MODE"?

    This scenario explains what I want to accomplish:

    - Ulc1's thread arrives first: column "locked" is false, so set it to true and continue the updating process.
    - Ulc2's thread arrives while Ulc1's transaction is still in progress. Even though the row is locked through InnoDB functionality, ideally it doesn't have to wait for Ulc1's transaction to commit; it just reads the "new" value of the locked column, which is "true" (by commit time, the value of this column will already have been reset to false).

    I am not very experienced with the two types of locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row, while FOR UPDATE doesn't even allow reading. But does this read get the updated value? Or does the second arriving thread have to wait for the first thread to commit before reading the value?

    Any recommendation about which locking mechanism to use for this scenario is appreciated. Also, if there's a better way to "check" whether the row has been locked (other than a true/false column flag), please let me know about it.
    Thank you.

    SOLUTION (JDBC pseudocode based on @Darhazer's answer)

    Table: [ id (primary key) | username | date | topic | locked ]

        connection.setAutoCommit(false);
        // Transaction 1: take the row lock and read the flag
        PreparedStatement ps1 = connection.prepareStatement(
            "SELECT locked FROM tableName WHERE id = ? AND locked = false FOR UPDATE");
        ps1.setLong(1, key);
        ps1.executeQuery();
        // Transaction 2: mark the row as locked
        PreparedStatement ps2 = connection.prepareStatement(
            "UPDATE tableName SET locked = true WHERE id = ?");
        ps2.setLong(1, key);
        ps2.executeUpdate();
        connection.setAutoCommit(true); // commits; other threads can now see the new value

        connection.setAutoCommit(false);
        // Transaction 3: the real update
        PreparedStatement ps3 = connection.prepareStatement(
            "UPDATE tableName SET aField = 'Sthg' WHERE id = ? AND date = ? AND topic = ?");
        ps3.setLong(1, key);
        ps3.setDate(2, d);
        ps3.setString(3, t);
        ps3.executeUpdate();
        // Reset locked to false
        PreparedStatement ps4 = connection.prepareStatement(
            "UPDATE tableName SET locked = false WHERE id = ?");
        ps4.setLong(1, key);
        ps4.executeUpdate();
        connection.setAutoCommit(true); // commit
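
    To make the accepted behavior concrete, here is a minimal two-session sketch at the SQL level (table and column names as in the question; the id value is invented):

        -- Session A (Ulc1)
        START TRANSACTION;
        SELECT locked FROM history WHERE id = 42 FOR UPDATE;  -- exclusive row lock, sees false
        UPDATE history SET locked = true WHERE id = 42;
        COMMIT;                                               -- lock released

        -- Session B (Ulc2), issued while Session A is still open
        START TRANSACTION;
        SELECT locked FROM history WHERE id = 42 FOR UPDATE;  -- blocks until A commits,
                                                              -- then returns locked = true
        -- Session B can now bail out with "already updating from another location".

    The key point: a locking read (FOR UPDATE or LOCK IN SHARE MODE) always returns the latest committed value, whereas a plain SELECT under REPEATABLE READ may return a stale snapshot; and since the winning thread intends to write the flag anyway, FOR UPDATE is the right choice.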


  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server. This operation is of course completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. This was due to two reasons: first, the Spaces page was using Content Presenter to display the contents of the data file; second, the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes are picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, the contents under a folder, a list of items, or a query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application. In addition to displaying the folders and files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or a custom Content Presenter display template. More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection settings through Enterprise Manager. Here, under Cache Details, there is a section to set the Cache Invalidation Interval. This enables the cache to be monitored by the cache "sweeper" utility. The sweeper queries for changes in the Content Server and then "marks" the changed objects in the cache as "dirty", which in turn causes the application to fetch a new copy of the document from the Content Server to replace the cached version. By default, the Cache Invalidation Interval is set to 0 (minutes), which means the sweeper is OFF. To turn the sweeper ON, just set a value in minutes; the minimal value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is the sample output from the execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Appliction ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application and, under the Administration menu item, select System Audit Information. Note: if your console is using the left-menu display option, the Administration link will be located there.

    Under Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This changes the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the timestamps:

        requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use two different pages, each containing a Content Presenter task flow. Each task flow will use a different custom Content Presenter display template and will be assigned a different contributor data file (the documents that will be in the cache). Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The VCR_GET_DOCUMENT_BY_NAME entries show where the two data files, DF_UCMCACHETESTER (the P1 page) and 113_ES (the P2 page), were called by the (Spaces) VCR connection to the Content Server. The VCR_GET_DOCUMENT_BY_NAME invocation is the most important line to notice. On subsequent refreshes of these two pages, you will see (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are now getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made.

    Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments, refreshing the P1 page shows that the update has been applied. Note: the refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not driven by when changes happen; the sweeper just runs every 2 minutes.

    Refreshing the Content Server View Server Output, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval, the sweeper is invoked, which is noted by the GET_ ... calls. Since the history has recorded the change, the next call is VCR_GET_DOCUMENT_BY_NAME, which retrieves the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file, which simply means it was served from the cache.
    Upon further review of the server output, we can see that there was only one request for VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****


  • CodePlex Daily Summary for Wednesday, February 17, 2010

    New Projects

    Academic Success Accounting System: The system is intended to be used by a school teacher to set marks to students and estimate their academic success and possibilities. The client applicat...
    Access.PowerTools: Access PowerTools is currently a sample MS Access add-in project to try & test features of Add-in Express™ 2009 for Microsoft® Office and .net (ht...
    AntoonCms: AntoonCms makes it easy to maintain a simple website with its builtin administration pages. It's developed in C# on target Framework 2.0. The CMS...
    ASP.NET MVC Mehr Lib: Mehr Lib makes it easier for ASP.NET MVC developers to develop projects. It's developed in C#. This version currently include Ajax master detail...
    BCryptTool: Developer tool that calculates BCrypt hash codes for strings. BCrypt is an implementation of the Blowfish cipher and a computationally-expensive ha...
    Coronasoft Cryostasis scripting engine: A scripting engine that allows you to dynamically load plugins from just about any supported .NET language. Its written in C#. Languages supported ...
    Critical Point Search: Critical Point Search
    critical points: critical points
    Font Family Name Retrieval: This library helps developer to retrieve the font family name from the TTF, OTF and TTC font files, so that developer can display the font without ...
    jQuery Form Input Hints Plugin: Automatically display hints on input textboxes in your forms using this jQuery plugin. I wrote this code to be as simple and as easy to use as pos...
    Kojax: kojax project
    KronRetro: KronRetro! Making a Habbo Retro just got easier! Powered by PHP & MySQL you can make a Habbo Retro site fast!
    MVVM Wrapper Kit: MVVM Wrapper Kit makes it easier for View Model programmers to wrap their business objects and collections while preserving change notification and...
    ObjectCartographer: ObjectCartographer is an object to object mapper and object factory. It's developed in C#.
    PE-file Reader Writer API (PERWAPI): PERWAPI is a reader writer module for .NET program executables. It has been used as back-end for progamming language compilers such as Gardens Poi...
    Pinger: A simple Pinger, pings an address until you press a button
    QPV: 0.1: QPV, aka Que pelicula, is an application for building a powerful database of movies, reviews and information so you can filter movi...
    SIMD Detector: This SIMD class helps developers to detect the types of SIMD instruction available on users' processor. It supports Intel and AMD CPUs. It is writt...
    StackOverflow Test Project: Following Andrew Siemer's StackOverflow Knowledge Exchange Project.
    WeBlog: A blogging platform built on the MVC framework. The project will showcase current technologies such as MVC 2, Silverlight 4 and jQuery 1.4. Data pro...
    Webmedia: this is my webmedia project
    Windows Azure RSS Reader: This is an online RSS reader based on the Windows Azure platform
    WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: This is a word editor that can be used as a stand alone word processor, or added to an existing project.
    Домашняя Бухгалтерия: A program for keeping home financial accounts.

    New Releases

    Access.PowerTools: Access PowerTools Add-In Community Edition v.0.0.1: Access PowerTools Add-In Community Edition v.0.0.1 is a sample MS Access add-in project to try & test Add-in Express™ 2009 for Microsoft® Office an...
    Active CSS: ActiveCSS-0.1.1: revision for version 0.1
    ASP.NET: Microsoft Ajax Minifier 4.0: The Microsoft Ajax Minifier enables you to improve the performance of your Ajax applications by reducing the size of your Cascading Style Sheet and...
    ASP.NET MVC Mehr Lib: V1.0: Mehr Lib V1.0. This version currently include ajax master detail combo facilities.
    ASP.net Ribbon: Version 1.2: New controls: Expandable gallery, Color Picker, Multi color, File Menu. Some JS modifications. Some CSS modifications. Includes some functionna...
    ASP.NET Web Forms Model-View-Presenter (MVP) Contrib: WebForms MVP Contrib CTP6: This is a release of the WebForms MVP Contrib project for WebForms MVP CTP6. Release includes: WebForms MVP Contrib framework, Ninject IoC container
    AwesomiumDotNet: AwesomiumDotNet 1.2.1: - Added Awesomium 1.5 features: URL filtering, header rewrite rules, SetOpensExternalLinksInCallingFrame. - Numerous fixes and improvements.
    BCryptTool: BCryptTool v0.1: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
    Buzz Dot Net: Buzz Dot Net v.1.10216: Features: Parse Google Buzz feed to Objects, Partial MVVM Implementation, Partial Optimizations
    Canvas VSDOC Intellisense: v1.0.0.0a: canvas-vsdoc.js and canvas-utils.js JavaScript intellisense for HTML5 Canvas element.
    CheckHeader: CheckHeader v0.8.5: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
    Claymore MVP: Claymore 1.0.2.0: Changelog: Added ASP.NET WebForm support via ClaymoreHttpModule class. Added xsd schema for Visual Studio Intellisense within App.config and Web....
    Dam Gd - URL Shortner: Dam.gd Version 1.1: This is the latest instalment in our URL shortener. It uses The Easy API http://theeasyapi.com to access data that is used for the back-end analyti...
    D-AMPS: D-AMPS 0.9.1: Initial version.
    easySMS: easySMS 1.0 Source code: easySMS 1.0 Source code
    Font Family Name Retrieval: 1st Release: Version 1.0.0
    Free Silverlight & WPF Chart Control - Visifire: Visifire Now Supports DataBinding: Hi, Today we are releasing the much awaited DataBinding feature in Visifire 3.0.3 beta 3. Now you can Bind any DataSource at the Series level so t...
    GenerateTypedBamApi: Version 2.0: Changes in this release: NEW: Export functionality no longer requires Excel to be installed (uses OLE DB vs. Excel Automation; also enables usage i...
    Gmail Notifier 2: GmailNotifier2 1.2.1: Fixes issues #9652, #9653
    iTuner - The iTunes Companion: iTuner 1.1.3699: This includes the first pass of the iTuner Librarian including management of dead tracks, duplicates, and empty directories... While I promised a ...
    jQuery Form Input Hints Plugin: jQuery.InputHints v1.0: jQuery.InputHints v1.0. Includes Standard & minified source, Demo HTML file, VS2008 Solution
    LibWowArmory: LibWowArmory 0.2.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.2. Changes since version 0.2.2: Update...
    Managed Extensibility Framework: MEF Preview 9: We have merged the .net 3.5 and Silverlight 3 into a single zip. The bin folder contains the binaries for .net 3.5 whereas bin\SL contains the bina...
    MDX Parser, Builder, DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-1.3.3.0-6571.msi: February 16, 2010 * MdxDesigner: Fix for the issue where, when an element is clicked, the mouse wheel stops working until the cursor leaves and r...
    MEFGeneric: MEFGeneric Preview 9: MEFGeneric Preview 9 release.
    Mesopotamia Experiment: Mesopotamia 1.2.26: Bug Fixes - mud map - progress window - recycle app domains on robotics engine crashes (in command prompt and visual, major work) - fixed rooomba h...
    Microsoft Solution Framework for Business Intelligence in Media: Release 1.0: This is the public release of the Microsoft Solution Framework for Business Intelligence in Media (Release 1.0).
    MVVM Wrapper Kit: MVVM Wrapper Beta: A simple test project is included to get you up and running, and wrapping those business objects.
    nBayes - Bayesian Filtering in C#: nBayes v0.2: nBayes' indexing system is factored in such a way that you can easily replace the index with a custom implementation. This release introduces an ad...
    NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.5: 3.6.0.5 16-February-2010 - Fix: SqlAzManSid Class. "Equals" matches object signature instead of IAzManSid signature. When a real null object is pas...
    ObjectCartographer: ObjectCartographer Code 1.0: This is the first release and contains code to help with object to object mapping (including mapping from one object to multiple objects), object f...
    Office Apps: 0.8.6: Bug fixes, added Calendar.
    OI - Open Internet: OI HTML and .XAP files (OI offline): this is the HTML code and the XAP file. please right-click the app at http://bit.ly/openinternet and select "install openinternet application to th...
    PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.3: Perwapi version 1.1.3 is the complete distribution package. It contains Binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
    Pinger: Pinger 1.0.0.0 Binary: The Latest Binary
    RNA Comparative Analysis Software Tools: RNA Comparative Analysis Software Tools 2.0: RNA Comparative Analysis Software Tools Version 2.0. Note: The RNA Comparative Analysis Software Tools are provided as is, without any warranty. No...
    SAL - Self Artificial Learning: Artificial Learning working proof of concept: This is a working proof of concept. It includes the Dev version (in .zip format) and the consumer version (in .exe format)
    SharePoint Management PowerShell scripts: SharePoint 2010 PowerShell Scripts: All the SharePoint 2010 PowerShell Scripts. The first file is an Excel 2010 file allowing to find quickly and easily the new cmdlets available wi...
    SIMD Detector: 1st Release: Version 1.3. Supports MMX/MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, SSE5, 3DNow.
    Terminals: Terminals 1.9 Beta Release: This is a beta release so the new features being added to terminals can be tested properly. The major change in this release is that Terminals has...
    Text Designer Outline Text Library: 9th minor release: Added the ability to select brush, such as gradient brush or texture brush, for the text body. Added CSharp library, TextDesignerCSLibrary. Manage...
    VivoSocial: VivoSocial 7.0.2: This release has several updated modules. See the Support Forums for more details. Since we update modules very often, we will be changing how we d...
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.6.00: changes: CKEditor Upgrade to Version 3.2 SVN 5132. File Browser: After File Upload, File will be Auto Selected. File Browser: Icons are corrected ...
    WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: WordEditor Source Code: This contains the latest solution file, with all project files included.
    Домашняя Бухгалтерия: Alpha Release: Your suggestions about the program's design and functionality are welcome.

    Most Popular Projects

    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    Image Resizer Powertoy Clone for Windows
    Microsoft SQL Server Community & Samples
    ASP.NET
    LiveUpload to Facebook

    Most Active Projects

    DinnerNow.net
    Rawr
    BlogEngine.NET
    Simple Savant
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    patterns & practices – Enterprise Library
    PHPExcel
    Sharpy
    jQuery Library for SharePoint Web Services
    Fluent Validation for .NET


  • CodePlex Daily Summary for Friday, December 10, 2010

    Popular Releases

    Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing the final version of Visifire, v3.6.5, with the following new feature: * New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more. http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: * Chart threw exception while adding new Axis in Chart using Vi...
    PHPExcel: PHPExcel 1.7.5 Production: Donations: Donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: We now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute? Please refer the Contribute page.
    UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version will work with ASP.NET WebPages and ASP.NET MVC Applications
    DNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03). There are C# and VB versions of this module for this initial release. No promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality: Create and display articles; Display a paged list of articles; Articles get created as DNN ContentItems; Categorization provided through DNN Taxonomy; SEO functionality for article display providi...
    UOB & ME: UOB_ME 2.5: latest version
    CouchDB.NET: CouchDB.NET 0.1: Libraries and providers to use CouchDB features from .NET. This distribution includes the following projects: - MachineKeyGenerator: Command line tool to generate a machine key string for use in App.Config and Web.Config files. - CouchDB.NET: Library to facilitate the use of CouchDB features. It uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server. More info at: https://github.com/hhariri/EasyHttp - CouchDb.ASP.NET: ASP.NET Membership Provider and ASP...
    AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (it is still recommended to use with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...
    SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.
    PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations (PHP Manager now detects the location of the php.ini file in accordance with the PHP specifications). Configuring date.timezone (PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3). Ability to ...
    Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.
    SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after socket server stopped; fixed a synchronization issue when clearing timeout sessions; fixed a bug in ArraySegmentList; fixed a bug on getting configuration value
    CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2 and was released 2010 December 7 and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi, Templates-2010-10-07.zip. For getting started instructions, refer to the How to section. Overview of the changes: Since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done on several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...
    My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.
    EnhSim: EnhSim 2.2.0 ALPHA: 2.2.0 ALPHA. This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...
    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup, WhiteSpaceFilterAttribute, tested on mozilla, safari, chrome, opera, ie 9b/8/7/6
    nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    myCollections: Version 1.2: New in version 1.2: Big performance improvement. New design (added Outlook-style view, new detail view, new Group By...). Added Sort by Media. Added Manage Movie Studio. Zoom preference is now saved. Media names are now editable. Added Portuguese version. You can now hide the details panel. Added support for FLAC tags. You can now import books from BibTex Xml file. Bug fixing.
    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web. Web for install hosting. System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database); if .\SQLEXPRESS, auto creation database (App_Data folder). mytrip.mvc 1.0.49.0 beta src. System Requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation table to database); if .\SQLEXPRESS, auto creation database (App_Data folder); Connector/Net 6.3.4, MVC3 RC. WARNING: For run and debug mytrip.mvc 1.0.49.0 beta src download and ...
    Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...
    MiniTwitter: 1.62

    New Projects

    AccountingGuid: for testing only
    Chinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a random character through your day. You can only get rid of it by typing in the pinyin.
    CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles providers so that you may use CouchDB as your integrated DB backend on your ASP.NET projects. Please see the readme.txt file for instructions.
    DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet based structures to proper domain objects. In essence the aim is to create the Mapping aspect of an ORM without the persistence concerns.
    EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.
    FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).
    GearSynth Plugin: a plugin for graphsynth that makes gear trains
    GroceryList: TBD with first version
    IBMS Suite Build on the Associate Platform: A new way of approaching Information Systems. From the UI, users of the IS will be able to build and manipulate the IS to whatever way fits their needs. We have simplified development, removed the chasm between management and IT and give the power of simplification to the user!
    Ivy Nasha Framework: A PHP Framework
    jQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#.
    JSTest.NET: JSTest.NET enables JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc), all without the need for a web browser. JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!
    Multicore Task Framework: MTF is a visual tool to simplify building robust component based .NET applications. MTF is designed to make full use of the power of multi-core processors.
    Nazha Script On DLR: Nazha
    PascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API.
    Perpetuum Hangar: A Character planner for the online game "Perpetuum"
    Projeto Exemplo: Sample project for activity 3 of the course.
    PSiteCode: PSiteCode Manager
    rScript Engine: rScript scripting engine is a managed script engine written in C# that supports Visual Basic and C# syntax based scripts. It provides Types for dynamically getting and setting properties, invoking methods and run-time compilation of scripts.
    SharePoint 2010 User Profile WebPart: This webpart shows all user profile properties and values of the properties for a particular user profile. The results are shown in a table containing the display and technical name together with the user value.
    SHC: shri
    SHMTools: SHMTools is a set of compatible software tools (mostly Matlab based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.
    SwapWin: SwapWin is a tiny and handy tool which swaps windows on different screens. Developed in C# and .NET 3.5.
    Teachers Diary: Teachers Diary is an application realizing an electronic teacher's notepad with student marks. The current localization of the application is in Czech only.
    VkApp: Vk app for downloading
    WebSpirit: A lightweight web server implemented in C# which supports sufficient extendible features. By zju
    WPF & MEF Studio: WPF & MEF Studio


  • CodePlex Daily Summary for Tuesday, May 18, 2010

    CodePlex Daily Summary for Tuesday, May 18, 2010New ProjectsCafeControl: Supports Remote LOGIN,Remote logout ,Account Creation ,Account LOGOUT ,Temporary Login ,SMS Reported and many other features Requires .net 4.0 Cloud & Contacts: Cloud Contacts makes it easier for share your contacts.Cow connect: Ziel des Projektes Cow connect, ist es ein Tool zu schrieben das Verschiedene Datenbanken, unterschiedlicher Herdenmanagement Tool z.b: Helm Multik...DNN Simple Blog: A simplified blog module that adheres to the DNN API and is designed for a single blog author per module instance. The module also makes use of Web...dotSpatial: dotSpatial is an open source project focused on developing a core set of GIS and mapping libraries that live together harmoniously in the System.Sp...Dynamic Survey Forms - SharePoint Web Part: Create manage dynamic survey forms as SharePoint web part. Record survey score for logged user or for someone else. This project has been designed ...EDXL Sharp: EDXLSharp is a C# / .NET 3.5 implementation of the OASIS Emergency Data Exchange Language (EDXL) family of standards. This set of libraries can be...EPiServer CMS 6 Visual Studio Project Template for VB w/ Public Templates: This is a Visual Studio 2008 Project Template with will allow the creation of a EPiServer CMS 6 project set up as and with all code in Visual Basic...Functional Command Toolbar for Windows: A floating window with a text box for typing functional command and executing it. The tool supports .NET addin, functional scripting and other feat...GameFX: Silverlight Game Development LibraryLightweight Fluent Workflow: ObjectFlow is a lightweight workflow engine for creating & executing workflows. The fluent interface makes it easy to define and understand workf...LinqSpecs: A toolset for use the specification pattern in linq queries.Money Watch: Personal Finances management system written in C#, NHibernate and SQL express.Multi-screen Viewer: This viewer allows to open and view pdf file (presentation) on multiple screens. There is no need to see the file in fullscreen on each screen (mon...neo-tsql: set of stored procedures and functions for sql serverNHTrace: NHTrace is a tool for tracing sql commands executed by NHibernate with syntax highlighting.NQueue: NQueue provides an enterprise level work scheduling and execution framework and toolset that contains no single point of failure. Using a farm of s...Online Cash Manager: Online Cash Manager based on ASP .NET MVC2 VS 2010 RTM MVC 2POCO Bridge: Bridging Silverlight and full .NET apps.REngine - game engine in Silverlight: REngine makes it easier for game developers to develop games in Microsoft Silverlight. RunAs Launcher: RunAs Launcher is a C# utility that provides a GUI for running applications under different credentials. It works in situations where the built-in ...secs4net: SECS-II/GEM/HSMS implementation on .NET. This library provide easy way to communicate with SEMI standard compatible device.SharePoint Admin Dashboard: SharePoint Dashboard for admins. Allows lightening fast multiple server management. RDP doesn't scale. Manage 10 servers easier than 1 with i...Silver spring: saltSocial Map: Social map is a social network based on geograpghical informationTweetZone: TweetZone is new type of twitter client application include DATABASE in it, and it shows you STATS. 
This Application's cache makes it faster to acc...Yet Another Database Viewer: Yet Another Database Viewer is developed for a basic database view and editing so you don't have to install anything. It's developed in c#.New Releases3FD - Framework For Fast Development: Alpha 1: The first test release. There is still some bugs, but it is functional. The garbage collector is showing memory leaking that must be corrected in t...Ajax Toolkit for ASP.NET MVC: MvcAjaxToolkit gridext with ContextMenu and Tmpl: MvcAjaxToolkit gridext with ContextMenu and Tmpl gridext is a extension for flexigridASP.NET MVC Extensions: SP1 Preview: SP1 Preview ========= 1. Autofac support added. 2. Changed Windsor Adapter. IWindsorInstaller is used instead of IModule.Book Cataloger: BookCataloger1.0.7a: New Features: Author editor form prototype Improvements: .NET Framework 4.0 required Input checking improved Comment edit loads and saves text...Braintree Client Library: Braintree-2.2.0: Prevent race condition when pulling back collection results -- search results represent the state of the data at the time the query was run Renam...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta 2: Documentation New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. Simplified test fixtures. Un-Refactored the not-so-simple MVP pa...dotNetTips: dotNetTips.Utility 3.5 R3: This is a new release (version 3.5.0.4) compatible with .NET 3.5. Requires SP1 if using the Entity Framework extensions. This is a minor update fro...Dynamic Survey Forms - SharePoint Web Part: Dynamic Survey forms for SharePoint. Alpha 1.0.1: Alpha release. Before running installer create database from script attached in zip file. In your web.config make sure your first connection strin...Event Scavenger: Viewer version 3.2.1: Added quick filters on event source and event id dialog boxes. Collector and Admin tool unaffected.Expression Blend Samples: Blend 4 Sketch Mockups Library: This library provides a set of commonly needed controls, icons and cursors to use in SketchFlow applications. After running the installer, create ...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.3: Fluent Ribbon Control Suite 1.3(supports .NET 3.5 and .NET 4 RTM) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples Found...Home Access Plus+: v4.2.2: Version 4.2.2 Change Log: Changes to how mycomputer handles NTFS permissions File Changes: ~/Bin/HAP.Web.dll ~/Bin/HAP.Web.pdbIdeaNMR: IdeaNMR Web App PreAlpha 0.1: This is the first release.IP Informer: Beta Release: V0.8.0.0 Beta.LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.9: Main updates for this release Fixed Status update. (updates where not correctly passed on to LinkedIn) Added landscape/GSensor support. (Currentl...LINQ to Twitter: LINQ to Twitter Beta v2.0.11: New items added since v1.1 include: Support for OAuth (via DotNetOpenAuth), secure communication via https, VB language support, serialization of ...LinqSpecs: Version 1.0 alpha: This is the alpha version of LinqSpecs.miniTodo: mini Todo version 0.2: ・デザインを透明主体に変更  -件数を表示している部分のドラッグでウィンドウ移動  -上記の部分右クリックで、「最前面に表示」、「全アイテム管理」 ・グラフを日/週/月単位の3種類に増やした ・新規作成、完了時にアニメーション追加。完了時にはサウンドも追加 ・テキスト未入力時は追加ボタンも非表示My Notepad: My Notepad (Beta): This is the Beta version of My Notepad. The software is stable enough for you to use. 
Enjoy the flexibility of docking and also the all new Syntax ...NHTrace: NHTrace-45713: NHTrace-45713Nito.KitchenSink: Version 8: New features (since Version 5) Fixed null reference bug in ExceptionExtensions. Added DynamicStaticTypeMembers and RefOutArg for dynamically (lat...Nito.LINQ: Beta (v0.5): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2521.102, released 2010-05-14. Breaking changes Corrected internal read-on...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.9: Work on UI templates for associative tables (2-column PK), using users/roles pages as an example. Added templates for Not-In-Group sql and cache-ba...patterns & practices - GAX Extensions Library: GEL for gax2010: This is the GEL for GAX 2010, support Visual Stuido 2010patterns & practices - Smart Client Guidance: Smart Client Software Factory 2010 Documentation: If the right-side pane of the chm file is not displayed correctly, do the following: 1) Download SCSF2010Guide.chm file. 2) Start the windows explo...patterns & practices - Windows Azure Guidance: WAAG - Part 1 - Release Candidate: "Release Candidate" for Part 1 of the Windows Azure Guide Highlights of this release are: Code samples complete. Fixed few bugs on "Dependency Ch...Rawr: Rawr 2.3.17: >Rawr3 Public Beta has been released! Click here for details.< - Lots of improvements to the default data files. There is a known issue with the s...RunAs Launcher: RunAs Launcher 1.2: This is the first version being released to CodePlex. Simply extract the file and run the executable. For those that wish to download the source c...Rx Contrib: V1.5: Bug fixsecs4net: Release 1.0: Notes: Runtime requirement: .Net framework 2.0 SP2 with System.Core(.NET 3.5), System.Threading(Rx for 3.5 SP1)SharePoint Admin Dashboard: SPDashboard v1.0: SPDashboard v1.0ShortURL Creator: ShortURL Creator 1.3.0.0: Added new provider u.nu and minimum UI changesStyleCop+: StyleCop+ 0.7: StyleCop+ is now fully compatible with StyleCop 4.4. The following entities were supported in Advanced Naming Rules: - Delegate - Event - Property...Value Injecter: an aspect oriented mapper: Value Injecter 1.2: ValueInjecter library, Sample solution that contains: web-forms sample project win-forms sample project unit tests samplesVCC: Latest build, v2.1.30517.0: Automatic drop of latest buildVCC: Latest build, v2.1.30517.1: Automatic drop of latest buildVidCoder: 0.4.1: Changes: Marks system as "working" to prevent computer from sleeping during an encode. CPU priority changed to BelowNormal during encodes. Enco...WSP Listener: WSP Listener version 2.0.0.0: This new version includes: All assemblies and required assets in one WSP Seperated code in library assembly Activate the WSP Listener with one...Yet Another Database Viewer: Beta: first release of the programYet another developer blog - Examples: Asynchronous TreeView in ASP.NET WebForms: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET WebForms. 
This application is acco...

    Read the article

  • New in MySQL Enterprise Edition: Policy-based Auditing!

    - by Rob Young
    For those with an interest in MySQL, this weekend's MySQL Connect conference in San Francisco has gotten off to a great start. On Saturday Tomas announced the feature-complete MySQL 5.6 Release Candidate that is now available for Community adoption and testing. This announcement marks the sprint to GA, which should be ready for release within the next 90 days. You can get a quick summary of the key 5.6 features here or, better yet, download the 5.6 RC (under "Development Releases"), review what's new and try it out for yourself! There were also product-related announcements around MySQL Cluster 7.3 and MySQL Enterprise Edition. This latter announcement is of particular interest if you are faced with internal and regulatory compliance requirements, as it addresses and solves a pain point shared by most developers and DBAs: new, out-of-the-box compliance for MySQL applications via policy-based audit logging of user and query level activity. One of the most common requests we get for the MySQL roadmap is for quick and easy logging of audit events.
    This is mainly due to how web-based applications have evolved from nice-to-have enablers to mission-critical revenue generators, and to the important role MySQL plays in that new dynamic. In today's virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps; from a corporate standpoint, Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public sector and other personal-data-centric industries. For the supporting applications, the audit policies and controls that monitor the eyes and hands that have viewed and acted upon the most sensitive of data are most commonly implemented on the back-end database.
    With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time and/or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy to use, out-of-the-box auditing solution: MySQL Enterprise Audit.
    MySQL Enterprise Audit
    The premise behind MySQL Enterprise Audit is simple: we wanted to provide an easy to use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to:
    1. Easily install the needed components. Installation requires an upgrade to MySQL 5.5.28 (Enterprise edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin: [mysqld] plugin-load=audit_log.so (keep in mind the audit_log suffix is platform dependent, so .dll on Windows, etc.), or alternatively you can load the plugin at runtime: mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
    2. Dynamically enable and disable the audit stream for a specific MySQL server. A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below.
    3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server. The new audit_log_policy variable uses the following valid, descriptively named values to enable or disable audit stream logging and to filter the audit events that are logged to the audit stream:
    "ALL" - enable audit stream and log all events
    "LOGINS" - enable audit stream and log only login events
    "QUERIES" - enable audit stream and log only query events
    "NONE" - disable audit stream
    4. Manage audit log files using basic MySQL log rotation features. A new global variable, audit_log_rotate_on_size, allows you to automate the rotation and archival of audit stream log files based on size, with archived log files renamed and appended with a datetime stamp when a new file is opened for logging.
    5. Integrate the MySQL audit stream with MySQL, Oracle tools and other third-party solutions. The MySQL audit stream is written as XML, using UTF-8, and can be easily formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases.
    So assuming a successful MySQL 5.5.28 upgrade or installation, a common setup and use case scenario might look something like the consolidated sketch at the end of this article. It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer by allowing you to control the mix of log output buffering and asynchronous or synchronous disk writes to minimize the associated overhead that comes when the audit stream is enabled. The net result is that, depending on the chosen audit log stream options, most application users will see little to no difference in response times when the audit stream is enabled.
    So what are your next steps? Get all of the details on MySQL Enterprise Audit, including all of the additional configuration options, from the MySQL documentation.
MySQL Enterprise Edition customers can download MySQL 5.5.28 with the Audit extension for production use from the My Oracle Support portal. Everyone can download MySQL 5.5.28 with the Audit extension for evaluation from the Oracle Software Delivery Cloud. Learn more about MySQL Enterprise Edition. As always, thanks for your continued support of MySQL!
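    Pulling the steps above together, a minimal end-to-end setup might look like the following sketch. The plugin suffix is platform-dependent as noted above, and the rotation size shown is an arbitrary example value, not a recommended default:

        # my.cnf - register the audit plugin at server startup (audit_log.dll on Windows)
        [mysqld]
        plugin-load=audit_log.so

        -- or load the plugin at runtime instead:
        mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

        -- choose what gets logged: 'ALL', 'LOGINS', 'QUERIES' or 'NONE'
        mysql> SET GLOBAL audit_log_policy = 'ALL';

        -- rotate and archive the XML audit log when it reaches a given size in bytes
        mysql> SET GLOBAL audit_log_rotate_on_size = 1073741824;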

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest? Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED': void method_one() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { Data* current_item = channel_cursors[i]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment)); } } } Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried... Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it. void method_two() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { if(update_cursor->state_ == NOT_FILLED_YET) { continue; } ::uint32_t update_id = update_cursor->list_id_; Data* current_item = channel_cursors[update_id]; if(current_item->state_ == NOT_FILLED_YET) { std::cerr << "This should never print." << std::endl; // it doesn't
    continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment)); update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } } Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach... Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.
    void method_three() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { std::size_t idx = i; ACQUIRE_MEMORY_BARRIER; if(update_cursor->state_ != NOT_FILLED_YET) { //std::cerr << "Found via update" << std::endl;
    i--; idx = update_cursor->list_id_; update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } Data* current_item = channel_cursors[idx]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } found_an_update = true; log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment)); } } } The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr
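    As an aside on why the barriers matter here: the publication protocol this question relies on is classic release/acquire ordering. Below is a minimal, self-contained C++11 restatement of that protocol using std::atomic instead of the ACQUIRE_MEMORY_BARRIER macro and volatile flag above. It is a simplification (plain pointers rather than Boost interprocess offsets), not the poster's actual code:

        #include <atomic>

        enum State { NOT_FILLED_YET, FILLED };

        struct Node {
            long tv_sec_;
            long tv_usec_;
            Node* next_;
            std::atomic<State> state_{NOT_FILLED_YET};
        };

        // Server: fill the payload and link the node *before* publishing.
        // The release store guarantees that a reader which observes FILLED
        // also observes the writes that preceded it.
        void publish(Node* n, long sec, long usec, Node* next) {
            n->tv_sec_  = sec;
            n->tv_usec_ = usec;
            n->next_    = next;
            n->state_.store(FILLED, std::memory_order_release);
        }

        // Client: the acquire load pairs with the release store above; once it
        // returns FILLED, tv_sec_/tv_usec_/next_ are safe to read.
        bool ready(const Node* n) {
            return n->state_.load(std::memory_order_acquire) == FILLED;
        }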

    Read the article

  • Office 2010: It’s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has been released to manufacturing. The bits have left the (product team’s) building. Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We’ve come a long way, baby. Or have we? As it does every three years or so, debate will now start to rage over whether we need a “14th” version of the PC platform’s standard word processor, or a “13th” version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others. The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents. Google Docs and other cloud productivity platforms like Zoho don’t really do these things. And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.” They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized. It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers? I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online. Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade.
    Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling. I provide a brief roundup of them here. It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively. Across the Suite More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network. The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window). It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed. And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that’s useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular. Excel At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale. That all changes with this release. The first reason things change is that Excel has been tuned for performance. It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to. On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is). But most important, Excel 2010 supports the new PowerPivot add-in, which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.
    Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance. And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home. Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it. PowerPoint This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well. Outlook Microsoft’s ubiquitous email/calendar/contact/task management tool gains long-overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there’s very little not to like. OneNote To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings. The best part of OneNote is the way each of its versions has managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it. OneNote is also great at integrating content outside of its notebooks.
    With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice. Collaboration gets better too. Real-time multi-author editing is better accommodated, and determining the author lineage of particular changes is easily carried out. My one pet peeve with OneNote is the difficulty using it when I’m not on a Windows PC. OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse. Access To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense. Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality. But if you do prefer Access, then you’ll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting. Word As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who’s editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.
    There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth shattering, but nice. Other Apps and Summarized Findings What about InfoPath, Publisher, Visio and Project? I haven’t looked at them yet. And for this post, I think that’s fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post serve the general-purpose needs of most users. And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • .NET interview, code structure and the design

    - by j_lewis
    I have been given the below .NET question in an interview. I don't know why I got low marks. Unfortunately I did not get any feedback. Question: The file hockey.csv contains the results from the Hockey Premier League. The columns 'For' and 'Against' contain the total number of goals scored for and against each team in that season (so Alabama scored 79 goals against opponents, and had 36 goals scored against them). Write a program to print the name of the team with the smallest difference in 'for' and 'against' goals. The structure of hockey.csv looks like this (it is a valid csv file, but I just copied the values here to give an idea):
    Team - For - Against
    Alabama 79 36
    Washinton 67 30
    Indiana 87 45
    Newcastle 74 52
    Florida 53 37
    New York 46 47
    Sunderland 29 51
    Lova 41 64
    Nevada 33 63
    Boston 30 64
    Nevada 33 63
    Boston 30 64
    Solution: class Program { static void Main(string[] args) { string path = @"C:\Users\<valid csv path>"; var resultEvaluator = new ResultEvaluator(string.Format(@"{0}\{1}",path, "hockey.csv")); var team = resultEvaluator.GetTeamSmallestDifferenceForAgainst(); Console.WriteLine( string.Format("Smallest difference in ‘For’ and ‘Against’ goals > TEAM: {0}, GOALS DIF: {1}", team.Name, team.Difference )); Console.ReadLine(); } } public interface IResultEvaluator { Team GetTeamSmallestDifferenceForAgainst(); } public class ResultEvaluator : IResultEvaluator { private static DataTable leagueDataTable; private readonly string filePath; private readonly ICsvExtractor csvExtractor; public ResultEvaluator(string filePath){ this.filePath = filePath; csvExtractor = new CsvExtractor(); } private DataTable LeagueDataTable{ get { if (leagueDataTable == null) { leagueDataTable = csvExtractor.GetDataTable(filePath); } return leagueDataTable; } } public Team GetTeamSmallestDifferenceForAgainst() { var teams = GetTeams(); var lowestTeam = teams.OrderBy(p => p.Difference).First(); return lowestTeam; } private IEnumerable<Team> GetTeams() { IList<Team> list = new List<Team>(); foreach (DataRow row in LeagueDataTable.Rows) { var name = row["Team"].ToString(); var @for = int.Parse(row["For"].ToString()); var against = int.Parse(row["Against"].ToString()); var team = new Team(name, against, @for); list.Add(team); } return list; } } public interface ICsvExtractor { DataTable GetDataTable(string csvFilePath); } public class CsvExtractor : ICsvExtractor { public DataTable GetDataTable(string csvFilePath) { var lines = File.ReadAllLines(csvFilePath); string[] fields; fields = lines[0].Split(new[] { ',' }); int columns = fields.GetLength(0); var dt = new DataTable(); //always assume 1st row is the column name.
    for (int i = 0; i < columns; i++) { dt.Columns.Add(fields[i].ToLower(), typeof(string)); } DataRow row; for (int i = 1; i < lines.GetLength(0); i++) { fields = lines[i].Split(new char[] { ',' }); row = dt.NewRow(); for (int f = 0; f < columns; f++) row[f] = fields[f]; dt.Rows.Add(row); } return dt; } } public class Team { public Team(string name, int against, int @for) { Name = name; Against = against; For = @for; } public string Name { get; private set; } public int Against { get; private set; } public int For { get; private set; } public int Difference { get { return (For - Against); } } } Output: Smallest difference in ‘For’ and ‘Against’ goals > TEAM: Boston, GOALS DIF: -34. Can someone please review my code and see anything obviously wrong here? They were only interested in the structure/design of the code and whether the program produces the correct result (i.e. the lowest difference).
    Much appreciated. P.S. - Please correct me if the ".net-interview" tag is not the right tag to use.
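    As an editorial aside on the spec itself: "smallest difference" is ambiguous about sign. Read as the smallest absolute difference, the sample data selects New York (|46 - 47| = 1) rather than Boston (30 - 64 = -34), which is what the posted program returns. A hypothetical one-line variant of GetTeamSmallestDifferenceForAgainst under that reading, reusing the question's own Team class:

        // Orders by |For - Against| instead of the signed difference.
        var closest = GetTeams().OrderBy(t => Math.Abs(t.For - t.Against)).First();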

    Read the article

  • Hi, I am creating a PHP calendar and I have a problem with it

    - by udaya
    Hi, I am creating a calendar in which I fill the year and month like this: <<<<< Year, <<<<< Month. By clicking on the arrow marks, the year and month can be increased or decreased. Now I have to fill in the dates for the selected year and month. I calculated the first day of the month and the last date of the month. The dates must start filling from the first day; say the first day is Thursday, then date 1 must be on Thursday and the following days must follow that, until the last date. These are my functions in my controller:
    function phpcal() {
        $month=04; $day=01; $year=2010;
        // here I am calculating the first day of the month
        echo date("D", mktime(0,0,0,$month,$day,$year));
        // here I am calculating the last date of the month
        echo '<br>lastdate'.date("t", strtotime($year . "-" . $month . "-01"));
        //echo '<br>'.$date_end = $this->lastOfMonth();
        $this->load->view('phpcal');
    }
    function firstOfMonth($m1,$y1) {
        return date("m/d/Y", strtotime($m1.'/01/'.$y1.' 00:00:00'));
    }
    function lastOfMonth() {
        return date("m/d/Y", strtotime('-1 second',strtotime('+1 month',strtotime(date('m').'/01/'.date('Y').' 00:00:00'))));
    }
    function phpcalview() {
        $year = $this->input->post('yearvv');
        $data['year'] = $this->adminmodel->selectyear();
        $data['date'] = $this->adminmodel->selectmonth();
        //print_r($data['date']);
        $this->load->view('phpcal',$data);
    }
    This is my view page:
    <table cellpadding="2" cellspacing="0" border="1" bgcolor="#CCFFCC" align="center" class="table_Style_Border">
    <? if(isset($date)) { foreach($date as $row) {?>
    <tr> <td><?= $row['dbDate1'];?></td> <td><?= $row['dbDate2'];?></td> <td><?= $row['dbDate3'];?></td> <td><?= $row['dbDate4'];?></td> <td><?= $row['dbDate5'];?></td> <td><?= $row['dbDate6'];?></td> <td><?= $row['dbDate7'];?></td> </tr>
    <tr bgcolor="#FFFFFF"> <td><?= $row['dbDate8'];?></td> <td><?= $row['dbDate9'];?></td> <td><?= $row['dbDate10'];?></td> <td><?= $row['dbDate11'];?></td> <td><?= $row['dbDate12'];?></td> <td><?= $row['dbDate13'];?></td> <td><?= $row['dbDate14'];?></td> </tr>
    <tr> <td><?= $row['dbDate15'];?></td> <td><?= $row['dbDate16'];?></td> <td><?= $row['dbDate17'];?></td> <td><?= $row['dbDate18'];?></td> <td><?= $row['dbDate19'];?></td> <td><?= $row['dbDate20'];?></td> <td><?= $row['dbDate21'];?></td> </tr>
    <tr bgcolor="#FFFFFF"> <td><?= $row['dbDate22'];?></td> <td><?= $row['dbDate23'];?></td> <td><?= $row['dbDate24'];?></td> <td><?= $row['dbDate25'];?></td> <td><?= $row['dbDate26'];?></td> <td><?= $row['dbDate27'];?></td> <td><?= $row['dbDate28'];?></td> </tr>
    <tr> <td><?= $row['dbDate29'];?></td> <td><?= $row['dbDate30'];?></td> <td><?= $row['dbDate31'];?></td> <td><?= $row['dbDate1'];?></td> <td><?= $row['dbDate1'];?></td> <td><?= $row['dbDate1'];?></td> <td><?= $row['dbDate1'];?></td> </tr>
    <? }} ?>
    </table>
    How can I insert the dates, starting from the first day I calculated in the function phpcal?
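    One way to lay the dates out, as a standalone sketch independent of the CodeIgniter controller above: it uses only PHP's standard date/mktime functions, and the Sunday-first column order is an assumption to adjust to your layout:

        <?php
        $month = 4;
        $year  = 2010;

        // 0 = Sunday ... 6 = Saturday, for the 1st of the month
        $firstDay = date('w', mktime(0, 0, 0, $month, 1, $year));
        // number of days in the month
        $lastDate = date('t', mktime(0, 0, 0, $month, 1, $year));

        echo "<table border='1'>";
        echo "<tr><td>Sun</td><td>Mon</td><td>Tue</td><td>Wed</td><td>Thu</td><td>Fri</td><td>Sat</td></tr>";

        $cell = 0;
        echo "<tr>";
        // pad the cells before the 1st with blanks
        for ($i = 0; $i < $firstDay; $i++, $cell++) {
            echo "<td></td>";
        }
        for ($day = 1; $day <= $lastDate; $day++, $cell++) {
            if ($cell > 0 && $cell % 7 == 0) {
                echo "</tr><tr>"; // start a new week row
            }
            echo "<td>$day</td>";
        }
        // pad the trailing cells of the last week
        while ($cell % 7 != 0) {
            echo "<td></td>";
            $cell++;
        }
        echo "</tr></table>";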

    Read the article

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my Linux box. It's an old drive from which I just want to get about 3 GB of files. INFO: I am trying to connect a 200GB IDE Maxtor drive, internally and externally... Externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under Ubuntu and Windows, in the past. The other posts stated it could be a problem; I think I may have formatted the /dev/sdc partition instead of the /dev/sdc1 partition when I originally formatted the drive. Internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with Windows XP and used the ext2/ext3 drivers to mount it, but some files have question marks (?) in the file names, which is messing up my copy process in Windows. I can't delete the files under Windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller. I have tried the suggestions in the questions below: http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive It looks like I can see the drive in /proc/partitions: $ cat /proc/partitions major minor #blocks name 8 0 78125000 sda 8 1 74894998 sda1 8 2 1 sda2 8 5 3229033 sda5 8 16 199148544 sdb <-- could be my drive? But it's not listed under fdisk -l: $ fdisk -l Disk /dev/sda: 80.0 GB, 80000000000 bytes 255 heads, 63 sectors/track, 9726 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xd0f4738c Device Boot Start End Blocks Id System /dev/sda1 * 1 9324 74894998+ 83 Linux /dev/sda2 9325 9726 3229065 5 Extended /dev/sda5 9325 9726 3229033+ 82 Linux swap / Solaris And here is my log from /var/log/messages, with a bunch of weird output; can someone let me know what that weird output is? Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3 Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver... Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2 Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0 Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB) Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off Mar 3 19:50:16 mala kernel: [687460.283784] sdb: Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000 Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00 Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace: Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30 Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40 Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90 Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40 Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80 Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50 Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0 Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60 Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80 Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160 Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0 Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160 Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180 Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330 Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20 Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0 Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310 Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? 
bdget+0xec/0x100 Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10 Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140 Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40 Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140 Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for traditional the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported as is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had some the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduces with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you override some specifics like backing storage) and the ability to store additional profile information (although the additional  profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really be focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? 
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured providers ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, and they rely on SQL Server cache dependencies, they depend on agents for clean up and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems. 
Works with your Schema

As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an "ID" column and a "username" column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias - it could be an email column; you just have to tell SimpleMembership to treat that as the "username" used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner:

WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true);

No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username.

Broaden database support to the whole SQL Server family

While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system.

Easy to use with Entity Framework Code First

The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModels.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class.

[Table("UserProfile")]
public class UserProfile
{
    [Key]
    [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
    public int UserId { get; set; }
    public string UserName { get; set; }
    public DateTime Birthday { get; set; }
}

Now if I want to access that information, I can just grab the account by username and read the value.

var context = new UsersContext();
var username = User.Identity.Name;
var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username);
var birthday = user.Birthday;
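Going one step further - and this is my hypothetical extension, not code from the template - updating that profile data is equally direct, because it's just Entity Framework data. UsersContext and UserProfile are the real classes from the template's AccountModels.cs; the SetBirthday helper and its name are illustrative assumptions:

using System;
using System.Linq;

public static class ProfileUpdates
{
    // Sketch only: writes profile data back through plain EF Code First.
    public static void SetBirthday(string username, DateTime birthday)
    {
        using (var context = new UsersContext())
        {
            var user = context.UserProfiles
                              .SingleOrDefault(u => u.UserName == username);
            if (user != null)
            {
                user.Birthday = birthday;   // just a column on our own table
                context.SaveChanges();      // standard EF save, no membership API involved
            }
        }
    }
}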
So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control.

How SimpleMembership integrates with ASP.NET Membership

Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates.

Membership in the ASP.NET MVC 4 project templates

ASP.NET MVC 4 offers six Project Templates:
- Empty - Really empty: just the assemblies, folder structure and a tiny bit of basic configuration.
- Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling).
- Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts or OAuth / OpenID providers.
- Intranet - Like the Internet template, but it's preconfigured for Windows Authentication.
- Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites.
- Web API - This is preconfigured for a service backend built on ASP.NET Web API.

Out of these templates, only one (the Internet template) uses SimpleMembership.

ASP.NET MVC 4 Basic template

The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers.
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config:

<profile defaultProvider="DefaultProfileProvider">
  <providers>
    <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</profile>
<membership defaultProvider="DefaultMembershipProvider">
  <providers>
    <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
  </providers>
</membership>
<roleManager defaultProvider="DefaultRoleProvider">
  <providers>
    <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</roleManager>
<sessionState mode="InProc" customProvider="DefaultSessionProvider">
  <providers>
    <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
  </providers>
</sessionState>

This means that it's business as usual for the Basic template as far as ASP.NET Membership is concerned.

ASP.NET MVC 4 Internet template

The Internet template has a few things set up to bootstrap SimpleMembership:
- \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
- \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
- \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity

WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example:

1. Create a new ASP.NET MVC 4 application using the Internet application template
2. Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
3. Run the application, click on Register, add a username and password, and click submit

You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider". This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else.
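To make the failure mode concrete, here's a minimal sketch of an equivalent guard. This is my illustration rather than the actual WebSecurity source; Membership.Provider, MembershipProvider, and ExtendedMembershipProvider are the real types (from System.Web.Security and WebMatrix.WebData), while the ProviderGuard class and method names are assumptions:

using System;
using System.Web.Security;
using WebMatrix.WebData;

public static class ProviderGuard
{
    // Illustrative only: WebSecurity performs an equivalent check internally.
    public static ExtendedMembershipProvider RequireExtendedProvider()
    {
        // Membership.Provider returns whatever provider web.config declares as
        // the default - e.g. the Universal Providers' DefaultMembershipProvider.
        var extended = Membership.Provider as ExtendedMembershipProvider;
        if (extended == null)
        {
            // A plain MembershipProvider lands here, producing the error quoted above.
            throw new InvalidOperationException(
                "To call this method, the \"Membership.Provider\" property " +
                "must be an instance of \"ExtendedMembershipProvider\".");
        }
        return extended;
    }
}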
So, what do you do? Options:

- If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.
- If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:
  - Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth package, etc.)
  - Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModels classes.
  - Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.

None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.

Membership in the ASP.NET 4.5 project template

ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.

How does this fit in with Universal Providers (System.Web.Providers)?

Just to summarize:
- Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another database backend in the SQL Server family (other than full SQL Server). It doesn't require agents to handle expired session cleanup and other background tasks; it piggybacks these tasks on other calls.
- Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
- Universal Providers do not work with SimpleMembership.
- The Universal Providers packages include some web.config transforms which you would normally want when you're using them.

What about the Web Site Administration Tool?

Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there:
- Use the WebSecurity and OAuthWebSecurity APIs to manage the users and roles
- Create a web admin using the above APIs

Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even via direct database edits (in development, of course).
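As a concrete example of that first option, here's a minimal sketch of managing users and roles in code with the WebSecurity and Roles APIs (WebMatrix.WebData and System.Web.Security; these calls are the real API surface). The connection string, table, and column names match the Internet template defaults, but the seeding class itself and the sample user are my assumptions, and the role calls assume SimpleRoleProvider is the configured role provider:

using System.Web.Security;
using WebMatrix.WebData;

public static class MembershipSeed
{
    public static void Run()
    {
        // Point SimpleMembership at the user table (Internet template defaults;
        // adjust the names to match your own schema).
        if (!WebSecurity.Initialized)
        {
            WebSecurity.InitializeDatabaseConnection(
                "DefaultConnection", // connection string name
                "UserProfile",       // user table
                "UserId",            // id column
                "UserName",          // user name column
                autoCreateTables: true);
        }

        // Create a user the way WSAT would have.
        if (!WebSecurity.UserExists("admin"))
        {
            WebSecurity.CreateUserAndAccount("admin", "Pass@word1");
        }

        // Manage roles through the standard Roles API.
        if (!Roles.RoleExists("Administrator"))
        {
            Roles.CreateRole("Administrator");
        }
        if (!Roles.IsUserInRole("admin", "Administrator"))
        {
            Roles.AddUserToRole("admin", "Administrator");
        }
    }
}

Something like this could run from a development-only seed step; a small custom admin page would simply wrap the same calls.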

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal.

Installing Durandal

First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The wiki, located at the GitHub project, contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console:

Install-Package Durandal.StarterKit

As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies, including:
· jQuery
· Knockout
· Sammy
· Twitter Bootstrap

The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic, like this:

bundles.Add(
    new ScriptBundle("~/scripts/vendor")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/knockout-{version}.js")
        .Include("~/Scripts/sammy-{version}.js")
        // .Include("~/Scripts/jquery-1.9.0.min.js")
        // .Include("~/Scripts/knockout-2.2.1.js")
        // .Include("~/Scripts/sammy-0.7.4.min.js")
        .Include("~/Scripts/bootstrap.min.js")
);

The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files:
· durandal – This folder contains the actual durandal JavaScript library.
· viewmodels – This folder contains all of your application's view models.
· views – This folder contains all of your application's views.
· main.js – This file contains all of the JavaScript startup code for your app, including the client-side routing configuration.
· main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS).
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder, which contains the durandal library.

Creating the ASP.NET MVC Controller and View

A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app:

@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>Index</title>
</head>
<body>
    <div id="applicationHost">
        Loading app....
    </div>
    @Scripts.Render("~/scripts/vendor")
    <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
</body>
</html>

Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text "Loading app…". Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library, such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery, throw a JavaScript exception, and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view:

<script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>

The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick off your Durandal app.

Creating the Durandal Main.js File

The Durandal main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here's what the main.js file looks like in the case of the Movies app:

require.config({
    paths: { 'text': 'durandal/amd/text' }
});

define(function (require) {
    var app = require('durandal/app'),
        viewLocator = require('durandal/viewLocator'),
        system = require('durandal/system'),
        router = require('durandal/plugins/router');

    //>>excludeStart("build", true);
    system.debug(true);
    //>>excludeEnd("build");

    app.start().then(function () {
        //Replace 'viewmodels' in the moduleId with 'views' to locate the view.
        //Look for partial views in a 'views' folder in the root.
        viewLocator.useConvention();

        //configure routing
        router.useConvention();
        router.mapNav("movies/show");
        router.mapNav("movies/add");
        router.mapNav("movies/details/:id");

        app.adaptToDevice();

        //Show the app by setting the root view model for our application with a transition.
        app.setRoot('viewmodels/shell', 'entrance');
    });
});

There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging, which looks like this:

//>>excludeStart("build", true);
system.debug(true);
//>>excludeEnd("build");

This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas.)

The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages.

//configure routing
router.useConvention();
router.mapNav("movies/show");
router.mapNav("movies/add");
router.mapNav("movies/details/:id");

The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie. Finally, the main.js file above contains the following line of code:

//Show the app by setting the root view model for our application with a transition.
app.setRoot('viewmodels/shell', 'entrance');

This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section.

Creating the Durandal Shell

You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app:

<h1>Movies App</h1>
<div class="container-fluid page-host">
    <!--ko compose: {
        model: router.activeItem, //wiring the router
        afterCompose: router.afterCompose, //wiring the router
        transition:'entrance', //use the 'entrance' transition when switching views
        cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models)
    }--><!--/ko-->
</div>

And here is what the JavaScript file looks like:

define(function (require) {
    var router = require('durandal/plugins/router');
    return {
        router: router,
        activate: function () {
            return router.activate('movies/show');
        }
    };
});

The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different page to the router.activate() method.

Creating the Movies Show Page

Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view.
The view contains all of the HTML markup for rendering the view model. Let's start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this:

define(function (require) {
    var moviesRepository = require("repositories/moviesRepository");
    return {
        movies: ko.observable(),
        activate: function () {
            this.movies(moviesRepository.listMovies());
        }
    };
});

You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this:

<table>
    <thead>
        <tr>
            <th>Title</th><th>Director</th>
        </tr>
    </thead>
    <tbody data-bind="foreach:movies">
        <tr>
            <td data-bind="text:title"></td>
            <td data-bind="text:director"></td>
            <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td>
        </tr>
    </tbody>
</table>
<a href="#/movies/add">Add Movie</a>

Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file, which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (to learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually.

Creating the Movies Add Page

The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here's the view model for the add page:

define(function (require) {
    var app = require('durandal/app');
    var router = require('durandal/plugins/router');
    var moviesRepository = require("repositories/moviesRepository");

    return {
        movieToAdd: {
            title: ko.observable(),
            director: ko.observable()
        },
        activate: function () {
            this.movieToAdd.title("");
            this.movieToAdd.director("");
            this._movieAdded = false;
        },
        canDeactivate: function () {
            if (this._movieAdded == false) {
                return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']);
            } else {
                return true;
            }
        },
        addMovie: function () {
            // Add movie to db
            moviesRepository.addMovie(ko.toJS(this.movieToAdd));
            // flag new movie
            this._movieAdded = true;
            // return to list of movies
            router.navigateTo("#/movies/show");
        }
    };
});

The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods:
1. activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties.
2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false, then navigation is cancelled.
3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository.

I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form. The view for the add movie page looks like this:

<form data-bind="submit:addMovie">
    <fieldset>
        <legend>Add Movie</legend>
        <div>
            <label>
                Title:
                <input data-bind="value:movieToAdd.title" required />
            </label>
        </div>
        <div>
            <label>
                Director:
                <input data-bind="value:movieToAdd.director" required />
            </label>
        </div>
        <div>
            <input type="submit" value="Add" />
            <a href="#/movies/show">Cancel</a>
        </div>
    </fieldset>
</form>

I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted.

Creating the Movies Details Page

You navigate to the movie details page by clicking the Details link which appears next to each movie in the movies show page. The Details links pass the movie ids to the details page:

#/movies/details/0
#/movies/details/1
#/movies/details/2

Here's what the view model for the movies details page looks like:

define(function (require) {
    var router = require('durandal/plugins/router');
    var moviesRepository = require("repositories/moviesRepository");

    return {
        movieToShow: {
            title: ko.observable(),
            director: ko.observable()
        },
        activate: function (context) {
            // Grab movie from repository
            var movie = moviesRepository.getMovie(context.id);
            // Add to view model
            this.movieToShow.title(movie.title);
            this.movieToShow.director(movie.director);
        }
    };
});

Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository, and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings:

<div>
    <h2 data-bind="text:movieToShow.title"></h2>
    directed by <span data-bind="text:movieToShow.director"></span>
</div>

Summary

The goal of this blog entry was to walk through building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done.
I also appreciate the fact that Durandal did not attempt to re-invent the wheel: it leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth into Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

Acknowledgements

- Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010
- Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010
- Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
- Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to recover costs. The work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity.

Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

We tried the "add environment" back in South-Africa, and while it was "phenomenal", especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

You can read about SSW's Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

- Merge Paranoia - avoiding merging at all cost, usually because of a fear of the consequences.
- Merge Mania - spending too much time merging software assets instead of developing them.
- Big Bang Merge - deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
- Never-Ending Merge - continuous merging activity because there is always more to merge.
- Wrong-Way Merge - merging a software asset version with an earlier version.
- Branch Mania - creating many branches for no apparent reason.
- Cascading Branches - branching but never merging back to the main line.
- Mysterious Branches - branching for no apparent reason.
- Temporary Branches - branching for changing reasons, so the branch becomes a permanent temporary workspace.
- Volatile Branches - branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
- Development Freeze - stopping all development activities while branching, merging, and building new base lines.
- Berlin Wall - using branches to divide the development team members, instead of dividing the work they are performing.

- Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia

Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Main line - this is where your stable code lives, where any build has known entities, always passes, and has happy tests that pass as well.

Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- you just use labels instead of branches?

That's a lot of "what if's", but there is a simple way of preventing this. Branching…

In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so that contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or checkouts of the "Main" line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have no artifacts in version control and thus no need for isolation.

Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:

1. The Team completes their work according to their definition of done.
2. Merge from "Main" into "Sprint1" (Forward Integration)
3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (We will talk about release branches later.)
4. Merge from "Sprint1" into "Main" to commit your changes (Reverse Integration)
5. Check-in
6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.
7. Check-in
8. Done

But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result, you fix it on the Release branch.

If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch; that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product, you can create an RTM branch.

Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable, and if a dispute was raised by the customer, you can produce a verifiable version of the source code for an independent party to check.
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalise the changes.

Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in.

How do we handle bug(s) in production that can't wait?

Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI's from main to sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main." covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: Add the bug to the backlog and fix it in the next Sprint
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly
3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe.

Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug you need to ship again.

You then need to create another RTM branch to hold, in escrow, the version of the code you released.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch - are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the "Big bang merge" at all costs.

Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion.

This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where:

- Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise.)
- The code is complex and finely tuned, and the original authors may no longer be available.
- The code is critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

Users in these risk-averse and/or high scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design

The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

- There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
- The section contents are identical in either case - there is no need to alter data to accommodate multiple files.
- It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
- The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance, no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

- To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
- To support fine grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
- To stay under the 4 Gbyte size limit of a 32-bit object. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
- To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... -z ancillary[=outfile] ...
By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by supplying an outfile value to the -z ancillary option.

When -z ancillary is specified, the link-editor performs the following actions:

- All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
- All remaining non-allocable sections are written to the ancillary object.
- The following non-allocable sections are written to both the primary object and the ancillary object.

  .shstrtab - The section name string table.
  .symtab - The full non-dynamic symbol table.
  .symtab_shndx - The symbol table extended index section associated with .symtab.
  .strtab - The non-dynamic string table associated with .symtab.
  .SUNW_ancillary - Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object.

The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands.

As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
21     .symtab          SYMTAB          0                 0                             0x450         0x450
22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within.

The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc

a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                   value
       [0]  ANC_SUNW_CHECKSUM     0x8724
       [1]  ANC_SUNW_MEMBER       0x1         a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724
       [3]  ANC_SUNW_MEMBER       0x1a3       a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2
       [5]  ANC_SUNW_NULL         0

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                   value
       [0]  ANC_SUNW_CHECKSUM     0xfbe2
       [1]  ANC_SUNW_MEMBER       0x1         a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724
       [3]  ANC_SUNW_MEMBER       0x1a3       a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2
       [5]  ANC_SUNW_NULL         0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined.

- The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects.
- The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files.
- It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows, as the sketch after this list shows.
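These identification rules are straightforward to implement. The sketch below is mine, not from the manual: it applies the rules to a .SUNW_ancillary section that has already been read into memory along with its associated string table. The structure layout and ANC_SUNW_* tag values are the ones given under ELF Implementation Details later in this posting.

    #include <stddef.h>
    #include <stdint.h>

    #define ANC_SUNW_NULL     0
    #define ANC_SUNW_CHECKSUM 1
    #define ANC_SUNW_MEMBER   2

    typedef struct {
            uint32_t        a_tag;
            union {
                    uint32_t        a_val;
                    uint32_t        a_ptr;  /* string table offset */
            } a_un;
    } Elf32_Ancillary;

    /*
     * Given the elements of a .SUNW_ancillary section and the string
     * table it references, return the name of the member whose checksum
     * matches the leading CHECKSUM element, i.e. the file being examined.
     */
    static const char *
    anc_identify_self(const Elf32_Ancillary *anc, const char *strtab)
    {
            uint32_t self = 0;
            int have_self = 0;
            const char *member = NULL;

            for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                    switch (anc->a_tag) {
                    case ANC_SUNW_CHECKSUM:
                            if (!have_self) {
                                    /* First element: checksum of this file. */
                                    self = anc->a_un.a_val;
                                    have_self = 1;
                            } else if (member != NULL &&
                                anc->a_un.a_val == self) {
                                    return (member);  /* this member is us */
                            }
                            break;
                    case ANC_SUNW_MEMBER:
                            member = strtab + anc->a_un.a_ptr;
                            break;
                    }
            }
            return (NULL);
    }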
Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files:

1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers, with pointers to the data for all sections (a sketch of this per-file merge appears after the note below). Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
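As an illustration of step 3, the following sketch, again mine and written under stated assumptions rather than taken from any actual debugger, fills in an in-memory section header array from one member of an ancillary group using libelf. A caller would invoke it once per file named in the .SUNW_ancillary section, after sizing the array from the file it started with.

    #include <fcntl.h>
    #include <gelf.h>
    #include <libelf.h>
    #include <unistd.h>

    #ifndef SHF_SUNW_ABSENT
    #define SHF_SUNW_ABSENT 0x00200000  /* fallback: value from Table 13-8 */
    #endif

    /*
     * Merge one member (primary or ancillary) into the combined view:
     * copy a section header into hdrs[] only when this file actually
     * holds the section's data, i.e. when SHF_SUNW_ABSENT is not set.
     */
    static void
    merge_member(const char *path, GElf_Shdr *hdrs, size_t nhdrs)
    {
            int fd = open(path, O_RDONLY);
            if (fd < 0)
                    return;

            Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
            Elf_Scn *scn = NULL;

            while (elf != NULL && (scn = elf_nextscn(elf, scn)) != NULL) {
                    GElf_Shdr shdr;
                    size_t ndx = elf_ndxscn(scn);

                    if (gelf_getshdr(scn, &shdr) == NULL || ndx >= nhdrs)
                            continue;
                    if ((shdr.sh_flags & SHF_SUNW_ABSENT) == 0)
                            hdrs[ndx] = shdr;
            }

            (void) elf_end(elf);
            (void) close(fd);
    }

After every member has been visited, hdrs[] describes each section exactly once, with the size and data location taken from whichever file holds the real content.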
ELF Implementation Details

(From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface
Chapter 13: Object File Format

Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type - Identifies the object file type, as listed in the following table.

Name               Value   Meaning
ET_NONE            0       No file type
ET_REL             1       Relocatable file
ET_EXEC            2       Executable file
ET_DYN             3       Shared object file
ET_CORE            4       Core file
ET_LOSUNW          0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY  0xfefe  Ancillary object file
ET_HISUNW          0xfefd  End operating system specific range
ET_LOPROC          0xff00  Start processor-specific range
ET_HIPROC          0xffff  End processor-specific range

Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...

sh_type - Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

sh_flags - Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.

...

Table 13-5 ELF Section Types, sh_type

Name                Value
. . .
SHT_LOSUNW          0x6fffffee
SHT_SUNW_ancillary  0x6fffffee
. . .

...

SHT_LOSUNW - SHT_HISUNW - Values in this inclusive range are reserved for Oracle Solaris OS semantics.

SHT_SUNW_ANCILLARY - Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

...

Table 13-8 ELF Section Attribute Flags

Name                 Value
. . .
SHF_MASKOS           0x0ff00000
SHF_SUNW_NODISCARD   0x00100000
SHF_SUNW_ABSENT      0x00200000
SHF_SUNW_PRIMARY     0x00400000
SHF_MASKPROC         0xf0000000
. . .

...

SHF_SUNW_ABSENT - Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY - The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.

...

Two members of the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

sh_type             sh_link                                     sh_info
. . .
SHT_SUNW_ANCILLARY  The section header index of the             0
                    associated string table.
. . .

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

Name             Type                Attribute
. . .
.SUNW_ancillary  SHT_SUNW_ancillary  None
. . .

...

.SUNW_ancillary - Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

...

Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val - These objects represent integer values with various interpretations.

a_ptr - These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

Name               Value  a_un
ANC_SUNW_NULL      0      Ignored
ANC_SUNW_CHECKSUM  1      a_val
ANC_SUNW_MEMBER    2      a_ptr

ANC_SUNW_NULL - Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM - Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER - Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

Tag                Meaning
ANC_SUNW_CHECKSUM  Checksum of this object
ANC_SUNW_MEMBER    Name of object #1
ANC_SUNW_CHECKSUM  Checksum for object #1
. . .
ANC_SUNW_MEMBER    Name of object N
ANC_SUNW_CHECKSUM  Checksum for object N
ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach.

The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved; please see the above URL for details.
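For readers unfamiliar with the gdb-style separate debug files mentioned above, the usual recipe looks roughly like the following. This is the sequence described in the gdb documentation, not anything Solaris-specific, and exact flag spellings can vary between binutils releases:

$ objcopy --only-keep-debug a.out a.out.debug
$ strip --strip-debug a.out
$ objcopy --add-gnu-debuglink=a.out.debug a.out

Note how this is a post-processing step over an already complete object, which is why the combined code plus debug data must still fit within the single-file 4GB limit that ancillary objects avoid.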

    Read the article

  • How to show the results in a datatable while using YUI

    - by udaya
Hi, I am using YUI to display a datagrid ...

<?php
$host = "localhost"; // database location
$user = "root"; // database username
$pass = ""; // database password
$db_name = "cms"; // database name

// database connection
$link = mysql_connect($host, $user, $pass);
mysql_select_db($db_name);

// sets encoding to utf8
$result = mysql_query("select dStud_id,dMarkObtained1,dMarkObtained2,dMarkObtained3,dMarkTotal from tbl_internalmarkallot");
//$res = mysql_fetch_array($result);
while ($res = mysql_fetch_array($result)) {
    //print_r($res);
    $JsonVar = json_encode($res);
    echo "<input type='text' name='json' id='json' value ='$JsonVar'>";
}
//print_r (mysql_fetch_array($result));
//echo "<input type='text' name='json' id='json' value ='$JsonVar'>";
?>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Client-side Pagination</title>
<style type="text/css"> body { margin:0; padding:0; } </style>
<link rel="stylesheet" type="text/css" href="build/fonts/fonts-min.css" />
<link rel="stylesheet" type="text/css" href="build/paginator/assets/skins/sam/paginator.css" />
<link rel="stylesheet" type="text/css" href="build/datatable/assets/skins/sam/datatable.css" />
<script type="text/javascript" src="build/yahoo-dom-event/yahoo-dom-event.js"></script>
<script type="text/javascript" src="build/connection/connection-min.js"></script>
<script type="text/javascript" src="build/json/json-min.js"></script>
<script type="text/javascript" src="build/element/element-min.js"></script>
<script type="text/javascript" src="build/paginator/paginator-min.js"></script>
<script type="text/javascript" src="build/datasource/datasource-min.js"></script>
<script type="text/javascript" src="build/datatable/datatable-min.js"></script>
<script type="text/javascript" src="YuiJs.js"></script>
<style type="text/css">
#paginated { text-align: center; }
#paginated table { margin-left:auto; margin-right:auto; }
#paginated, #paginated .yui-dt-loading { text-align: center; background-color: transparent; }
</style>
</head>
<body class="yui-skin-sam" onload="ProjectDatatable(document.getElementById('json').value);">
<h1>Client-side Pagination</h1>
<div class="exampleIntro"> </div>
<input type="hidden" id="HfId"/>
<div id="paginated"> </div>
<script type="text/javascript">
/*YAHOO.util.Event.onDOMReady(function() {
    YAHOO.example.ClientPagination = function() {
        var myColumnDefs = [
            {key:"dStud_id", label:"ID", sortable:true, resizeable:true, editor: new YAHOO.widget.TextareaCellEditor()},
            {key:"dMarkObtained1", label:"Name", sortable:true},
            {key:"dMarkObtained2", label:"CycleTest1"},
            {key:"dMarkObtained3", label:"CycleTest2"},
            {key:"dMarkTotal", label:"CycleTest3"},
        ];
        var myDataSource = new YAHOO.util.DataSource("assets/php/json_proxy.php?");
        myDataSource.responseType = YAHOO.util.DataSource.TYPE_JSON;
        myDataSource.responseSchema = {
            resultsList: "records",
            fields: ["dStud_id","dMarkObtained1","dMarkObtained2","dMarkObtained3","dMarkTotal"]
        };
        var oConfigs = {
            paginator: new YAHOO.widget.Paginator({ rowsPerPage: 15 }),
            initialRequest: "results=504"
        };
        var myDataTable = new YAHOO.widget.DataTable("paginated", myColumnDefs, myDataSource, oConfigs);
        return { oDS: myDataSource, oDT: myDataTable };
    }();
});*/
</script>
<?php
echo "m".$res['dMarkObtained1'];
echo "m".$res['dMarkObtained2'];
echo "m".$res['dMarkObtained3'];
echo "Tm".$res['dMarkTotal'];
{?><?
}?>
</body>
</html>

This is my page, where I fetch the data from the database. The script file it loads is below:

function generateDatatable(target, jsonObj, myColumnDefs, hfId) {
    var root;
    for (key in jsonObj) { root = key; break; }
    var rootId = "id";
    if (jsonObj[root].length > 0) {
        for (key in jsonObj[root][0]) { rootId = key; break; }
    }
    YAHOO.example.DynamicData = function() {
        var myPaginator = new YAHOO.widget.Paginator({
            rowsPerPage: 10,
            template: YAHOO.widget.Paginator.TEMPLATE_ROWS_PER_PAGE,
            rowsPerPageOptions: [5, 25, 50, 100],
            pageLinks: 10
        });
        // DataSource instance
        var myDataSource = new YAHOO.util.DataSource(jsonObj);
        myDataSource.responseType = YAHOO.util.DataSource.TYPE_JSON;
        myDataSource.responseSchema = { resultsList: root, fields: new Array() };
        myDataSource.responseSchema.fields[0] = rootId;
        for (var i = 0; i < myColumnDefs.length; i++) {
            myDataSource.responseSchema.fields[i + 1] = myColumnDefs[i].key;
        }
        // DataTable configuration
        var myConfigs = {
            sortedBy: { key: myDataSource.responseSchema.fields[1], dir: YAHOO.widget.DataTable.CLASS_ASC }, // Sets UI initial sort arrow
            paginator: myPaginator
        };
        // DataTable instance
        var myDataTable = new YAHOO.widget.DataTable(target, myColumnDefs, myDataSource, myConfigs);
        myDataTable.subscribe("rowMouseoverEvent", myDataTable.onEventHighlightRow);
        myDataTable.subscribe("rowMouseoutEvent", myDataTable.onEventUnhighlightRow);
        myDataTable.subscribe("rowClickEvent", myDataTable.onEventSelectRow);
        myDataTable.subscribe("checkboxClickEvent", function(oArgs) {
            var hidObj = document.getElementById(hfId);
            var elCheckbox = oArgs.target;
            var oRecord = this.getRecord(elCheckbox);
            var id = oRecord.getData(rootId);
            if (elCheckbox.checked) {
                if (hidObj.value == "") { hidObj.value = id; }
                else { hidObj.value += "," + id; }
            } else {
                hidObj.value = removeIdFromArray("" + hfId, id);
            }
        });
        myPaginator.subscribe("changeRequest", function() {
            if (document.getElementById(hfId).value != "") {
                /*if (document.getElementById("ConfirmationPanel").style.display == 'block') {
                    document.getElementById("ConfirmationPanel").style.display = 'none';
                }*/
                document.getElementById(hfId).value = "";
            }
            return true;
        });
        myDataTable.handleDataReturnPayload = function(oRequest, oResponse, oPayload) {
            oPayload.totalRecords = oResponse.meta.totalRecords;
            return oPayload;
        }
        return { ds: myDataSource, dt: myDataTable };
    } ();
}

function removeIdFromArray(values, id) {
    values = document.getElementById(values).value;
    if (values.indexOf(',') == 0) { values = values.substring(1); }
    if (values.indexOf(values.length - 1) == ",") { values = values.substring(0, values.length - 1); }
    var ids = values.split(',');
    var rtnValue = "";
    for (var i = 0; i < ids.length; i++) {
        if (ids[i] != id) { rtnValue += "," + ids[i]; }
    }
    if (rtnValue.indexOf(",") == 0) { rtnValue = rtnValue.substring(1); }
    return rtnValue;
}

function edityuitable() {
    var ErrorDiv = document.getElementById("ErrorDiv");
    var editId = document.getElementById("ctl00_ContentPlaceHolder1_HfId").value;
    if (editId.length == 0) {
        ErrorDiv.innerHTML = getErrorMsgStyle("Select a row for edit");
        //alert("Select a row for edit");
        return false;
    } else {
        var editarray = editId.split(",");
        if (editarray.length != 1) {
            ErrorDiv.innerHTML = getErrorMsgStyle("Select One row for edit");
            //alert("Select One row for edit");
            return false;
        } else if (editarray.length == 1) {
            return true;
        }
    }
}

function Deleteyuitable() {
    var ErrorDiv = document.getElementById("ErrorDiv");
    var editId = document.getElementById("ctl00_ContentPlaceHolder1_HfId").value;
    if (editId.length == 0) {
        ErrorDiv.innerHTML = getErrorMsgStyle("Select a row for Delete");
        return false;
    } else {
        return true;
    }
}

function ProjectDatatable(HfJsonValue) {
    alert(HfJsonValue);
    var myColumnDefs = [
        {key:"dStud_id", label:"ID", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}},
        {key:"dMarkObtained1", label:"Marks", width:200, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}},
        {key:"dMarkObtained2", label:"Marks1", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}},
        {key:"dMarkObtained3", label:"Marks2", width:200, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}},
        {key:"dMarkTotal", label:"Total", width:150, sortable:true, sortOptions:{defaultDir:YAHOO.widget.DataTable.CLASS_DESC}},
        {key:"", formatter:"checkbox"}
    ];
    var jsonObj = eval('(' + HfJsonValue + ')');
    var target = "paginated";
    var hfId = "HfId";
    generateDatatable(target, jsonObj, myColumnDefs, hfId);
}
// JavaScript Document

This is my script page. When I load the page I do get the first row from the database, but the consecutive rows are not displayed in the alert box. How can I receive the data in the datagrid?

    Read the article

  • Problems extracting information from RSS feed description field

    - by Graeme
Hi, I've built an iPhone application using the parsing code from the TopSongs sample iPhone application. I've hit a problem though: the feed I'm trying to parse doesn't have a separate field for every piece of information. (For example, if it were a feed about dogs, all the information such as dog type, dog age and dog price would be contained in a single description field, whereas the TopSongs app relies on each piece of information having its own tag.) So my question is this: how do I extract this information from the description field so that it can be parsed using the TopSongs parser? Can you somehow extract the dog age, price and type information using Yahoo Pipes, and use the resulting RSS feed instead? Or is there code that I can add to do it in the application?

Update: The code of my application's parser (based on the TopSongs Core Data sample application Apple provides) is below. Here's a sample of one item from the actual RSS feed I'm using (the description is longer, and has status, size, and a couple of other fields, but they're all formatted the same):

<item>
<title>MOE, MARGRET STREET</title>
<description><b>District/Region:</b>&nbsp;REGION 09</br><b>Location:</b>&nbsp;MOE</br><b>Name:</b>&nbsp;MARGRET STREET</br></description>
<pubDate>Thu,11 Mar 2010 05:43:03 GMT</pubDate>
<guid>1266148</guid>
</item>

/*
 File: iTunesRSSImporter.m
 Abstract: Downloads, parses, and imports the iTunes top songs RSS feed into Core Data.
 Version: 1.1

 Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your
 agreement to the following terms, and your use, installation, modification or redistribution of this Apple software
 constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or
 redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to
 these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple
 software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without
 modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety
 and without modifications, you must retain this notice and the following text and disclaimers in all such
 redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be
 used to endorse or promote products derived from the Apple Software without specific prior written permission from
 Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by
 Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by
 other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS"
 basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF
 NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
 OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT,
 INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
 MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT
 (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
 DAMAGE.

 Copyright (C) 2009 Apple Inc. All Rights Reserved.
*/

#import "iTunesRSSImporter.h"
#import "Song.h"
#import "Category.h"
#import "CategoryCache.h"
#import <libxml/tree.h>

// Function prototypes for SAX callbacks. This sample implements a minimal subset of SAX callbacks.
// Depending on your application's needs, you might want to implement more callbacks.
static void startElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes);
static void endElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI);
static void charactersFoundSAX(void *context, const xmlChar *characters, int length);
static void errorEncounteredSAX(void *context, const char *errorMessage, ...);

// Forward reference. The structure is defined in full at the end of the file.
static xmlSAXHandler simpleSAXHandlerStruct;

// Class extension for private properties and methods.
@interface iTunesRSSImporter ()
@property BOOL storingCharacters;
@property (nonatomic, retain) NSMutableData *characterBuffer;
@property BOOL done;
@property BOOL parsingASong;
@property NSUInteger countForCurrentBatch;
@property (nonatomic, retain) Song *currentSong;
@property (nonatomic, retain) NSURLConnection *rssConnection;
@property (nonatomic, retain) NSDateFormatter *dateFormatter;
// The autorelease pool property is assign because autorelease pools cannot be retained.
@property (nonatomic, assign) NSAutoreleasePool *importPool;
@end

static double lookuptime = 0;

@implementation iTunesRSSImporter

@synthesize iTunesURL, delegate, persistentStoreCoordinator;
@synthesize rssConnection, done, parsingASong, storingCharacters, currentSong, countForCurrentBatch, characterBuffer, dateFormatter, importPool;

- (void)dealloc {
    [iTunesURL release];
    [characterBuffer release];
    [currentSong release];
    [rssConnection release];
    [dateFormatter release];
    [persistentStoreCoordinator release];
    [insertionContext release];
    [songEntityDescription release];
    [theCache release];
    [super dealloc];
}

- (void)main {
    self.importPool = [[NSAutoreleasePool alloc] init];
    if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) {
        [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(importerDidSave:) name:NSManagedObjectContextDidSaveNotification object:self.insertionContext];
    }
    done = NO;
    self.dateFormatter = [[[NSDateFormatter alloc] init] autorelease];
    [dateFormatter setDateStyle:NSDateFormatterLongStyle];
    [dateFormatter setTimeStyle:NSDateFormatterNoStyle];
    // necessary because iTunes RSS feed is not localized, so if the device region has been set to other than US
    // the date formatter must be set to US locale in order to parse the dates
    [dateFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"US"] autorelease]];
    self.characterBuffer = [NSMutableData data];
    NSURLRequest *theRequest = [NSURLRequest requestWithURL:iTunesURL];
    // create the connection with the request and start loading the data
    rssConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
    // This creates a context for "push" parsing in which chunks of data that are not "well balanced" can be passed
    // to the context for streaming parsing. The handler structure defined above will be used for all the parsing.
    // The second argument, self, will be passed as user data to each of the SAX handlers. The last three arguments
    // are left blank to avoid creating a tree in memory.
    context = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL);
    if (rssConnection != nil) {
        do {
            [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
        } while (!done);
    }
    // Display the total time spent finding a specific object for a relationship
    NSLog(@"lookup time %f", lookuptime);
    // Release resources used only in this thread.
    xmlFreeParserCtxt(context);
    self.characterBuffer = nil;
    self.dateFormatter = nil;
    self.rssConnection = nil;
    self.currentSong = nil;
    [theCache release];
    theCache = nil;
    NSError *saveError = nil;
    NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]);
    if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) {
        [[NSNotificationCenter defaultCenter] removeObserver:delegate name:NSManagedObjectContextDidSaveNotification object:self.insertionContext];
    }
    if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importerDidFinishParsingData:)]) {
        [self.delegate importerDidFinishParsingData:self];
    }
    [importPool release];
    self.importPool = nil;
}

- (NSManagedObjectContext *)insertionContext {
    if (insertionContext == nil) {
        insertionContext = [[NSManagedObjectContext alloc] init];
        [insertionContext setPersistentStoreCoordinator:self.persistentStoreCoordinator];
    }
    return insertionContext;
}

- (void)forwardError:(NSError *)error {
    if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importer:didFailWithError:)]) {
        [self.delegate importer:self didFailWithError:error];
    }
}

- (NSEntityDescription *)songEntityDescription {
    if (songEntityDescription == nil) {
        songEntityDescription = [[NSEntityDescription entityForName:@"Song" inManagedObjectContext:self.insertionContext] retain];
    }
    return songEntityDescription;
}

- (CategoryCache *)theCache {
    if (theCache == nil) {
        theCache = [[CategoryCache alloc] init];
        theCache.managedObjectContext = self.insertionContext;
    }
    return theCache;
}

- (Song *)currentSong {
    if (currentSong == nil) {
        currentSong = [[Song alloc] initWithEntity:self.songEntityDescription insertIntoManagedObjectContext:self.insertionContext];
    }
    return currentSong;
}

#pragma mark NSURLConnection Delegate methods

// Forward errors to the delegate.
- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
    [self performSelectorOnMainThread:@selector(forwardError:) withObject:error waitUntilDone:NO];
    // Set the condition which ends the run loop.
    done = YES;
}

// Called when a chunk of data has been downloaded.
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    // Process the downloaded chunk of data.
    xmlParseChunk(context, (const char *)[data bytes], [data length], 0);
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // Signal the context that parsing is complete by passing "1" as the last parameter.
    xmlParseChunk(context, NULL, 0, 1);
    context = NULL;
    // Set the condition which ends the run loop.
    done = YES;
}

#pragma mark Parsing support methods

static const NSUInteger kImportBatchSize = 20;

- (void)finishedCurrentSong {
    parsingASong = NO;
    self.currentSong = nil;
    countForCurrentBatch++;
    // Periodically purge the autorelease pool and save the context. The frequency of this action may need to be tuned according to the
    // size of the objects being parsed. The goal is to keep the autorelease pool from growing too large, but
    // taking this action too frequently would be wasteful and reduce performance.
    if (countForCurrentBatch == kImportBatchSize) {
        [importPool release];
        self.importPool = [[NSAutoreleasePool alloc] init];
        NSError *saveError = nil;
        NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]);
        countForCurrentBatch = 0;
    }
}

/*
 Character data is appended to a buffer until the current element ends.
 */
- (void)appendCharacters:(const char *)charactersFound length:(NSInteger)length {
    [characterBuffer appendBytes:charactersFound length:length];
}

- (NSString *)currentString {
    // Create a string with the character data using UTF-8 encoding. UTF-8 is the default XML data encoding.
    NSString *currentString = [[[NSString alloc] initWithData:characterBuffer encoding:NSUTF8StringEncoding] autorelease];
    [characterBuffer setLength:0];
    return currentString;
}

@end

#pragma mark SAX Parsing Callbacks

// The following constants are the XML element names and their string lengths for parsing comparison.
// The lengths include the null terminator, to ensure exact matches.
static const char *kName_Item = "item";
static const NSUInteger kLength_Item = 5;
static const char *kName_Title = "title";
static const NSUInteger kLength_Title = 6;
static const char *kName_Category = "category";
static const NSUInteger kLength_Category = 9;
static const char *kName_Itms = "itms";
static const NSUInteger kLength_Itms = 5;
static const char *kName_Artist = "description";
static const NSUInteger kLength_Artist = 7;
static const char *kName_Album = "description";
static const NSUInteger kLength_Album = 6;
static const char *kName_ReleaseDate = "releasedate";
static const NSUInteger kLength_ReleaseDate = 12;

/*
 This callback is invoked when the importer finds the beginning of a node in the XML. For this application, our
 parsing needs are relatively modest - we need only match the node name. An "item" node is a record of data about a
 song. In that case we create a new Song object. The other nodes of interest are several of the child nodes of the
 Song currently being parsed. For those nodes we want to accumulate the character data in a buffer. Some of the
 child nodes use a namespace prefix.
 */
static void startElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes) {
    iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext;
    // The second parameter to strncmp is the name of the element, which we know from the XML schema of the feed.
    // The third parameter to strncmp is the number of characters in the element name, plus 1 for the null terminator.
    if (prefix == NULL && !strncmp((const char *)localname, kName_Item, kLength_Item)) {
        importer.parsingASong = YES;
    } else if (importer.parsingASong && (
        (prefix == NULL && (!strncmp((const char *)localname, kName_Title, kLength_Title) || !strncmp((const char *)localname, kName_Category, kLength_Category))) ||
        ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) &&
         (!strncmp((const char *)localname, kName_Artist, kLength_Artist) ||
          !strncmp((const char *)localname, kName_Album, kLength_Album) ||
          !strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate))))) {
        importer.storingCharacters = YES;
    }
}

/*
 This callback is invoked when the parse reaches the end of a node. At that point we finish processing that node, if
 it is of interest to us. For "item" nodes, that means we have completed parsing a Song object. We pass the song to a
 method in the superclass which will eventually deliver it to the delegate. For the other nodes we care about, this
 means we have all the character data. The next step is to create an NSString using the buffer contents and store
 that with the current Song object.
 */
static void endElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI) {
    iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext;
    if (importer.parsingASong == NO) return;
    if (prefix == NULL) {
        if (!strncmp((const char *)localname, kName_Item, kLength_Item)) {
            [importer finishedCurrentSong];
        } else if (!strncmp((const char *)localname, kName_Title, kLength_Title)) {
            importer.currentSong.title = importer.currentString;
        } else if (!strncmp((const char *)localname, kName_Category, kLength_Category)) {
            double before = [NSDate timeIntervalSinceReferenceDate];
            Category *category = [importer.theCache categoryWithName:importer.currentString];
            double delta = [NSDate timeIntervalSinceReferenceDate] - before;
            lookuptime += delta;
            importer.currentSong.category = category;
        }
    } else if (!strncmp((const char *)prefix, kName_Itms, kLength_Itms)) {
        if (!strncmp((const char *)localname, kName_Artist, kLength_Artist)) {
            NSString *string = importer.currentSong.artist;
            NSArray *strings = [string componentsSeparatedByString: @", "];
            //importer.currentSong.artist = importer.currentString;
        } else if (!strncmp((const char *)localname, kName_Album, kLength_Album)) {
            importer.currentSong.album = importer.currentString;
        } else if (!strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate)) {
            NSString *dateString = importer.currentString;
            importer.currentSong.releaseDate = [importer.dateFormatter dateFromString:dateString];
        }
    }
    importer.storingCharacters = NO;
}

/*
 This callback is invoked when the parser encounters character data inside a node. The importer class determines how
 to use the character data.
 */
static void charactersFoundSAX(void *parsingContext, const xmlChar *characterArray, int numberOfCharacters) {
    iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext;
    // A state variable, "storingCharacters", is set when nodes of interest begin and end.
    // This determines whether character data is handled or ignored.
    if (importer.storingCharacters == NO) return;
    [importer appendCharacters:(const char *)characterArray length:numberOfCharacters];
}

/*
 A production application should include robust error handling as part of its parsing implementation. The specifics
 of how errors are handled depends on the application.
 */
static void errorEncounteredSAX(void *parsingContext, const char *errorMessage, ...) {
    // Handle errors as appropriate for your application.
    NSCAssert(NO, @"Unhandled error encountered during SAX parse.");
}

// The handler struct has positions for a large number of callback functions. If NULL is supplied at a given position,
// that callback functionality won't be used. Refer to libxml documentation at http://www.xmlsoft.org for more information
// about the SAX callbacks.
static xmlSAXHandler simpleSAXHandlerStruct = {
    NULL,                /* internalSubset */
    NULL,                /* isStandalone */
    NULL,                /* hasInternalSubset */
    NULL,                /* hasExternalSubset */
    NULL,                /* resolveEntity */
    NULL,                /* getEntity */
    NULL,                /* entityDecl */
    NULL,                /* notationDecl */
    NULL,                /* attributeDecl */
    NULL,                /* elementDecl */
    NULL,                /* unparsedEntityDecl */
    NULL,                /* setDocumentLocator */
    NULL,                /* startDocument */
    NULL,                /* endDocument */
    NULL,                /* startElement */
    NULL,                /* endElement */
    NULL,                /* reference */
    charactersFoundSAX,  /* characters */
    NULL,                /* ignorableWhitespace */
    NULL,                /* processingInstruction */
    NULL,                /* comment */
    NULL,                /* warning */
    errorEncounteredSAX, /* error */
    NULL,                /* fatalError: unused, error() gets all the errors */
    NULL,                /* getParameterEntity */
    NULL,                /* cdataBlock */
    NULL,                /* externalSubset */
    XML_SAX2_MAGIC,
    NULL,
    startElementSAX,     /* startElementNs */
    endElementSAX,       /* endElementNs */
    NULL,                /* serror */
};

Thanks.

    Read the article

< Previous Page | 24 25 26 27 28