Search Results

Search found 15041 results on 602 pages for 'breaking changes'.


  • jQuery CSS Property Monitoring Plug-in updated

    - by Rick Strahl
    A few weeks back I had talked about the need to watch properties of an object and be able to take action when certain values changed. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects. One example is a drop shadow - if I add a shadow behavior to an object I want the shadow to be pinned to that object, so when that object moves I also want the shadow to move with it, or when the panel is hidden the shadow should hide with it - automatically, without having to explicitly hook up monitoring code to the panel. For example, in my shadow plug-in I can now do something like this (where el is the element that has the shadow attached and sh is the shadow): if (!exists) // if shadow was created el.watch("left,top,width,height,display", function() { if (el.is(":visible")) $(this).shadow(opt); // redraw else sh.hide(); }, 100, "_shadowMove"); The code now monitors several properties, and if any of them change the provided function is called. So when the target object is moved, hidden or resized, the watcher function is called and the shadow can be redrawn or hidden in the case of visibility going away. So if you run any of the following code: $("#box") .shadow() .draggable({ handle: ".blockheader" }); // drag around the box - shadow should follow // hide the box - shadow should disappear with box setTimeout(function() { $("#box").hide(); }, 4000); // show the box - shadow should come back too setTimeout(function() { $("#box").show(); }, 8000); This can be very handy functionality when you're dealing with objects or operations that you need to track generically and there are no native events for them. For example, with a generic shadow object that attaches itself to any other element there's no way that I know of to track whether the object has been moved or hidden, either via some UI operation (like dragging) or via code. While some UI operations like jQuery.ui.draggable allow events to fire when the mouse is moved, nothing of the sort exists if you modify locations in code. Even tracking the object in drag mode is hardly generic behavior - a generic shadow implementation can't know when dragging is hooked up. So the watcher provides an alternative that basically gives an Observer-like pattern that notifies you when something you're interested in changes. In the watcher hookup code (in the shadow() plugin) above, a check is made whether the object is visible; if it is, the shadow is redrawn, otherwise the shadow is hidden. The first parameter is a list of CSS properties to be monitored, followed by the function that is called. The function called receives this as the element that's been changed and receives two parameters: the watched properties with their current values, plus an index to the property that caused the change function to fire. How does it work? When I wrote about this last time I started out with a simple timer that would poll for changes at a fixed interval with setInterval(). A few folks commented that there is a DOM API - DOMAttrModified in Mozilla and propertychange in IE - that allows notification whenever any property changes, which is much more efficient and smooth than the setInterval approach I used previously. On browsers that support these events (Firefox and IE, basically - WebKit has the DOMAttrModified event but it doesn't appear to work) the shadow effect is instant - no 'drag behind' of the shadow.
Running on a browser that doesn't support still uses setInterval() and the shadow movement is slightly delayed which looks sloppy. There are a few additional changes to this code - it also supports monitoring multiple CSS properties now so a single object can monitor a host of CSS properties rather than one object per property which is easier to work with. For display purposes position, bounds and visibility will be common properties that are to be watched. Here's what the new version looks like: $.fn.watch = function (props, func, interval, id) { /// <summary> /// Allows you to monitor changes in a specific /// CSS property of an element by polling the value. /// when the value changes a function is called. /// The function called is called in the context /// of the selected element (ie. this) /// </summary> /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param> /// <param name="func" type="Function"> /// Function called when the value has changed. /// </param> /// <param name="interval" type="Number"> /// Optional interval for browsers that don't support DOMAttrModified or propertychange events. /// Determines the interval used for setInterval calls. /// </param> /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param> /// <returns type="jQuery" /> if (!interval) interval = 200; if (!id) id = "_watcher"; return this.each(function () { var _t = this; var el$ = $(this); var fnc = function () { __watcher.call(_t, id) }; var itId = null; var data = { id: id, props: props.split(","), func: func, vals: [props.split(",").length], fnc: fnc, origProps: props, interval: interval }; $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); }); el$.data(id, data); hookChange(el$, id, data.fnc); }); function hookChange(el$, id, fnc) { el$.each(function () { var el = $(this); if (typeof (el.get(0).onpropertychange) == "object") el.bind("propertychange." + id, fnc); else if ($.browser.mozilla) el.bind("DOMAttrModified." + id, fnc); else itId = setInterval(fnc, interval); }); } function __watcher(id) { var el$ = $(this); var w = el$.data(id); if (!w) return; var _t = this; if (!w.func) return; // must unbind or else unwanted recursion may occur el$.unwatch(id); var changed = false; var i = 0; for (i; i < w.props.length; i++) { var newVal = el$.css(w.props[i]); if (w.vals[i] != newVal) { w.vals[i] = newVal; changed = true; break; } } if (changed) w.func.call(_t, w, i); // rebind event hookChange(el$, id, w.fnc); } } $.fn.unwatch = function (id) { this.each(function () { var el = $(this); var fnc = el.data(id).fnc; try { if (typeof (this.onpropertychange) == "object") el.unbind("propertychange." + id, fnc); else if ($.browser.mozilla) el.unbind("DOMAttrModified." + id, fnc); else clearInterval(id); } // ignore if element was already unbound catch (e) { } }); return this; } There are basically two jQuery functions - watch and unwatch. jQuery.fn.watch(props,func,interval,id) Starts watching an element for changes in the properties specified. props The CSS properties that are to be watched for changes. If any of the specified properties changes the function specified in the second parameter is fired. func (watchData,index) The function fired in response to a changed property. Receives this as the element changed and object that represents the watched properties and their respective values. 
The first parameter is passed in this structure:    { id: itId, props: [], func: func, vals: [] }; A second parameter is the index of the changed property so data.props[i] or data.vals[i] gets the property value that has changed. interval The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds. id An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified. jQuery.fn.unwatch(id) Unhooks watching of the element by disconnecting the event handlers. id Optional watcher id that was specified in the call to watch. This value can be omitted to use the default value of _watcher. You can also grab the latest version of the  code for this plug-in as well as the shadow in the full library at: http://www.west-wind.com:8080/svn/jquery/trunk/jQueryControls/Resources/ww.jquery.js watcher has no other dependencies although it lives in this larger library. The shadow plug-in depends on watcher.© Rick Strahl, West Wind Technologies, 2005-2011
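
    The technique described above boils down to a generic pattern: take a snapshot of the watched values, then either poll on a timer or hook a change event, compare against the snapshot, and invoke the callback for whichever value changed. As a language-neutral illustration of the polling fallback only (a sketch - the function and variable names below are invented and are not part of the plug-in), the same idea looks like this in Python:

      import threading
      import time

      def watch(read_props, on_change, interval=0.2):
          """Poll a dict of name -> zero-argument callables and invoke
          on_change(name, new_value) when a value differs from the last poll.
          Returns a callable that stops the watcher (the 'unwatch')."""
          last = {name: read() for name, read in read_props.items()}
          stopped = threading.Event()

          def tick():
              if stopped.is_set():
                  return
              for name, read in read_props.items():
                  value = read()
                  if value != last[name]:
                      last[name] = value
                      on_change(name, value)
                      break  # like the jQuery plug-in, report one change per poll
              timer = threading.Timer(interval, tick)
              timer.daemon = True
              timer.start()

          tick()
          return stopped.set

      box = {"left": 0, "top": 0}
      stop = watch({"left": lambda: box["left"], "top": lambda: box["top"]},
                   lambda name, value: print(name, "changed to", value))
      box["left"] = 50   # within ~0.2s the callback fires with ("left", 50)
      time.sleep(0.5)    # give the poller a chance to notice the change
      stop()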

    Read the article

  • CodePlex Daily Summary for Sunday, November 10, 2013

    CodePlex Daily Summary for Sunday, November 10, 2013Popular ReleasesWindow Embedded Compact (CE) Component Wizard: Version 4.0 Improved Compact13Minshell Support: This version will work with Platform Builder for Compact 2013 in Visual Studio 2012 (Update 2) as well as CE 6 (VS2005) and Compact 7 (VS2008) Select files for direct inclusion in the OS when built. Select where shortcuts are placed in the OS FileSystem for them During the build of the subproject, the selected files are copied to FlatRelease directory, along with the BIB file etc, for inclusion in the OS build. A .inf file is also generated along with the subproject for .cab file gener...Media Companion: Media Companion MC3.587b: Fixed* TV - Locked shows display correctly after refresh * TV - missing episodes display in correct colour for missed or to be aired * TV - Rescrape of Multi-episodes working. * TV - Cache fix where was writing episodes multiple times * TV - Fixed Cache writing missing episodes when Display missing eps was disabled. Revision HistoryGenerate report of user mailbox size for Exchange 2010: Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Generate-report-of-user-e4e9afcaCheck SQL Server a specified database index fragmentation percentage (SQL): Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Check-SQL-Server-a-a5758043Save attachments from multiple selected items in Outlook (VBA): Script Download: Script Download: http://gallery.technet.microsoft.com/scriptcenter/Save-attachments-from-5b6bf54bRemove Windows Store apps in Windows 8: Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4PCSX-Reloaded: 1.9.94: General changes:Support for compressed audio in cue files. ECM support. OS X changes:32-bit support has been dropped Partial French and Hungarian translationsDynamics AX 2012 R2 Kitting: AX 2012 R2 CU7 release of Kitting: Here is the AX 2012 R2 CU7 release of kitting. Released both as a XPO and a model.PantheR's GraphX for .NET: GraphX for .NET RELEASE v1.0.1: PLEASE RATE THIS RELEASE IF YOU LIKED IT! THANKS! :) RELEASE 1.0.1 + Changed ExportToImage() parameters: added useZoomControlSurface param that enables zoom control parent visual space to be used for export instead whole GraphArea panel. Using this technique it is possible to export graphs with negative vertices coordinates. + Added common interface IZoomControl for all included Zoom controls + Added new method GraphArea.GenerateGraph() that accepts only optional parameters and will use in...VidCoder: 1.5.12 Beta: Added an option to preserve Created and Last Modified times when converting files. In Options -> Advanced. Added an option to mark an automatically selected subtitle track as "Default". Updated HandBrake core to SVN 5878. Fixed auto passthrough not applying just after switching to it. 
Fixed bug where preset/profile/tune could disappear when reverting a preset.Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolbox improvement Correct changing connection from the status dropdown Tools improvement Updated tool Audit Center (v1.2013.9.10) -> Publish entities Iconator (v1.2013.9.27) -> Optimized asynchronous loading of images and entities MetadataDocumentGenerator (v1.2013.11.6) -> Correct system entities reading with incorrect attribute type Script Manager (v1.2013.9.27) -> Retrieve only custom events SiteMapEditor (v1.2013.11.7) -> Reset of CRM 2013 SiteMap ViewLayoutReplicator (v1.201...Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample, based: This sample showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively-compiled stored procedures, and can be used to show the performance benefit of in-memory OLTP. Installation instructions for the sample are included in the file ‘awinmemsample.doc’, which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326) Write a review for this release - help us improve, recommend us. Getting started If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.1 The following are highlights of major changes since Composite C1 4.0: General user features: Drag-and-drop images and files like PDF and Word directly from own your desktop and folders into page content Allow you to install Composite Form Builder ...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyVeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. Improved font size ...Duplica: duplica 0.2.498: this is first stable releaseDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. 
These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerVG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesNew Projects3389????? Wpf: 3389?????BitwiseEncoding: Bitwise encodeing with a key and XOR function.C++ language Tests: Just some tests on C++ language features.Check SQL Server a specified database index fragmentation percentage (SQL): This T-SQL sample script illustrates how to check index fragmentation of a specified database in SQL Server. Generate report of user mailbox size for Exchange 2010: This script could be used to export mailboxes’ information to a CSV file, including SamAccountName, DisplayName, TotalItemSize. IUAIMS: AI managment system MineCraftMODDevelopSupportKit: MineCraftMOD?????????? MineCraftMOD development integrated assistance systemRecording Audio in the Browser and Uploading it with ASP.NET MVC: This project is described on blog.falafel.comRemove Windows Store apps in Windows 8: This script can be used to remove multiple Windows Store apps from a user account in Windows 8. It provides a list of installed Windows Store apps. You can speSave attachments from multiple selected items in Outlook (VBA): This VBA sample illustrates how to save attachments from multiple selected items in Outlook.SMW: El presente proyecto es una aplicación orientada al manejo de información de la empresa Jamecl que se encarga del alquiler de camionetasSS-eye-S: An easy to use simplified API surrounding the SSIS components to allow the creation of SSIS packages within code.STAR FOX XNA: Remake of a classic videogame. In this case, that videogame will be Star Fox (1993) for SNES game console. wooyang: ???

    Read the article

  • CodePlex Daily Summary for Monday, November 11, 2013

    CodePlex Daily Summary for Monday, November 11, 2013Popular ReleasesULS Log Viewer Feature: ULS Log Viewer Feature: What's getting installed? - The solution include Custom Action (New action in Site Menu). Features: - Only Site Administrators can see Custom Action in site menu. - Multi-Language SUPPORT! (English, Hebrew). - Works against Secure Store Service to use administrator rights for getting access to ULS Logs Path on farm servers (Use the guide for creating new SSA). - Load only the last log file from ULS log path. - Gets automatically the ULS log path from Central Administration. - Choose from whi...Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.3.3: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Magick.NET: Magick.NET 6.8.7.501: Magick.NET linked with ImageMagick 6.8.7.5. Breaking changes: - Refactored MagickImageStatistics to prepare for upcoming changes in ImageMagick 7. - Renamed MagickImage.SetOption to SetDefine.Media Companion: Media Companion MC3.587b: Fixed* TV - Locked shows display correctly after refresh * TV - missing episodes display in correct colour for missed or to be aired * TV - Rescrape of Multi-episodes working. * TV - Cache fix where was writing episodes multiple times * TV - Fixed Cache writing missing episodes when Display missing eps was disabled. Revision HistoryGenerate report of user mailbox size for Exchange 2010: Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Generate-report-of-user-e4e9afcaCheck SQL Server a specified database index fragmentation percentage (SQL): Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Check-SQL-Server-a-a5758043Save attachments from multiple selected items in Outlook (VBA): Script Download: Script Download: http://gallery.technet.microsoft.com/scriptcenter/Save-attachments-from-5b6bf54bRemove Windows Store apps in Windows 8: Script Download: Script Download http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4PCSX-Reloaded: 1.9.94: General changes:Support for compressed audio in cue files. ECM support. OS X changes:32-bit support has been dropped Partial French and Hungarian translationsDynamics AX 2012 R2 Kitting: AX 2012 R2 CU7 release of Kitting: Here is the AX 2012 R2 CU7 release of kitting. Released both as a XPO and a model.VidCoder: 1.5.12 Beta: Added an option to preserve Created and Last Modified times when converting files. In Options -> Advanced. Added an option to mark an automatically selected subtitle track as "Default". Updated HandBrake core to SVN 5878. Fixed auto passthrough not applying just after switching to it. 
Fixed bug where preset/profile/tune could disappear when reverting a preset.Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolbox improvement Correct changing connection from the status dropdown Tools improvement Updated tool Audit Center (v1.2013.9.10) -> Publish entities Iconator (v1.2013.9.27) -> Optimized asynchronous loading of images and entities MetadataDocumentGenerator (v1.2013.11.6) -> Correct system entities reading with incorrect attribute type Script Manager (v1.2013.9.27) -> Retrieve only custom events SiteMapEditor (v1.2013.11.7) -> Reset of CRM 2013 SiteMap ViewLayoutReplicator (v1.201...Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample, based: This sample showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively-compiled stored procedures, and can be used to show the performance benefit of in-memory OLTP. Installation instructions for the sample are included in the file ‘awinmemsample.doc’, which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326) Write a review for this release - help us improve, recommend us. Getting started If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.1 The following are highlights of major changes since Composite C1 4.0: General user features: Drag-and-drop images and files like PDF and Word directly from own your desktop and folders into page content Allow you to install Composite Form Builder ...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Resources Editor: Resources Editor: Stable releaseWPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Now row summaries are updated whenever autofilter value sis modified.Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyVeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. 
Improved font size ...New ProjectsASP.NET Web API: ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET WebDungeonPaper: DungeonPaper will be a platform for RPG players to utilize computers to replace traditional pen and paper in their campaigns.EMS: demoEvent Dispatcher: Event Dispatcher is a powershell module for centralized logging. Supports filtering events, and outputs to flat files or the Windows event log.Koopakiller.Numerics: A growing Math-Library for .NET, provided as a portable class library.NetroTrix: NetroTrix is a LAN Messenger Project.....Sharp Explorer: Sort of an alpha Play with C# to make a treeView Windows explorer. Currently multi-threaded dbl click to explore, r-click to preview some text files.TI Sensor Tag Library: A library to access the Texas Instruments Sensor Tag in Windows Store apps with Bluetooth GATT. Written in C#.ULS Log Viewer Feature: "ULS Log Viewer Feature" used by SharePoint Administrators, Developers and SharePoint Site Managers for get easily the real error behind the Correlation ID.Visual Studio Code Line Counter: VS Code Line Counter parses a Visual Studio solution file (.sln), reads each included project, and counts the number of lines in each included file.Vulcanus: TODOWax - The WiX Setup Project Editor: An interactive editor for WiX setup projects.

    Read the article

  • Throttling Cache Events

    - by dxfelcey
    The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to, or can, be sent to consumers. For instance: rapid changes may not be consumable or interpretable as fast as they are being sent - a user looking at changing stock prices may only be able to interpret and react to 1 change per second; a client may be using a low-bandwidth connection, so rapidly sending events will only result in them being queued and delayed; or a large number of clients may need to be notified of state changes, and sending 100 events p/s to 1000 clients cannot be supported with the available hardware, but 10 events p/s to 1000 clients can (note this example assumes that many of the state changes are to the same value). One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and insert those changes periodically into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (event listener) against the second, events cache. By using the cache store write-behind feature, rapid updates to the same cache entry are coalesced, so that updates are merged and written to the events cache at the configured interval. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.

      <caching-schemes>
        <distributed-scheme>
          <scheme-name>CustomDistributedCacheScheme</scheme-name>
          <service-name>CustomDistributedCacheService</service-name>
          <thread-count>1</thread-count>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <scheme-name>CustomRWBackingMapScheme</scheme-name>
              <internal-cache-scheme>
                <local-scheme />
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <scheme-name>CustomCacheStoreScheme</scheme-name>
                  <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{cache-name}</param-value>
                    </init-param>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <!-- The name of the cache to write events to -->
                      <param-value>cqc-test</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
              <write-delay>1s</write-delay>
              <write-batch-factor>0</write-batch-factor>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>

    The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions.
      public class CustomCacheStore implements CacheStore {
          private String publishingCacheName;
          private String sourceCacheName;

          public CustomCacheStore(String sourceCacheName, String publishingCacheName) {
              this.publishingCacheName = publishingCacheName;
              this.sourceCacheName = sourceCacheName;
          }

          @Override
          public Object load(Object key) {
              return null;
          }

          @Override
          public Map loadAll(Collection keyCollection) {
              return null;
          }

          @Override
          public void erase(Object key) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).remove(key);
                  CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
              }
          }

          @Override
          public void eraseAll(Collection keyCollection) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  for (Object key : keyCollection) {
                      CacheFactory.getCache(publishingCacheName).remove(key);
                      CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG);
                  }
              }
          }

          @Override
          public void store(Object key, Object value) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).put(key, value);
                  CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG);
              }
          }

          @Override
          public void storeAll(Map entryMap) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).putAll(entryMap);
                  CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG);
              }
          }
      }

    As you can see, each cache store operation on the data cache results in a similar operation on the events cache. This is a very simple pattern which has a lot of additional possibilities, but it also has a few drawbacks you should be aware of: this event throttling implementation will use additional memory, as a duplicate copy of the entries held in the data cache needs to be held in the events cache too - two copies if the events cache has backups; and a data cache may already use a cache store, so a "multiplexing cache store pattern" must also be used to send changes to both the existing cache store and the throttling cache store. If you would like to try out this throttling example you can download it here. I hope it's useful and let me know if you spot any further optimizations.

    Read the article

  • Silverlight Connections - Vegas Nov 1-4

    I got asked to speak at Silverlight Connections in Vegas in Nov. :) Topics include: 'Breaking Down Walls: The Story of Getting Designers and Developers Working Together in an Agency Environment', 'Multi Dos and Don't Touches: Multi Touch Development from the Trenches', and 'Going from Silverlight to Phone 7: Silverlight Application Development'. http://www.devconnections.com/shows/FALL2010ASP/default.asp?s=151

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years Niraj enjoys working on – IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received Microsoft MVP award for ASP.NET, Connected Systems and most recently on Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred as transactional database and reporting database. Though there are tools / techniques which can allow you to create snapshot of your transactional database for reporting purpose, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an Information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where a need for data warehouse or an OLAP system arises. A Key point to remember is a data warehouse is mostly a relational database. It’s built on top of same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured let’s understand key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) OLTP system should be capable of tracking its changes as all these changes should go back to data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from silver to gold category, OLTP system needs to ensure that this change is tracked and send to data warehouse for reporting purpose. A report in context could be how many customers divided by geographies moved from sliver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of box features provided by some databases e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes we need to provision a batch process that can run periodically and takes these changes from OLTP system and dump them into data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, we have jobs that run periodically to move these changes to warehouse. The question though remains is how warehouse will record these changes? This structural change in data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
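
    To make the star schema concrete, here is a small, self-contained sketch in Python using the standard library's sqlite3 module; the table and column names are invented for illustration and are not from the article. The sales numbers live in a fact table, the agent and date context live in dimension tables keyed by surrogate keys, and "slicing and dicing" is simply aggregating the measure grouped by dimension attributes:

      import sqlite3

      con = sqlite3.connect(":memory:")
      cur = con.cursor()

      # Dimensions carry descriptive context; the fact table carries the measures
      # plus foreign keys (surrogate keys) pointing at each dimension.
      cur.executescript("""
      CREATE TABLE dim_agent (agent_key INTEGER PRIMARY KEY, agent_name TEXT);
      CREATE TABLE dim_date  (date_key  INTEGER PRIMARY KEY, full_date TEXT, year INTEGER, month INTEGER);
      CREATE TABLE fact_sales (
          sales_key INTEGER PRIMARY KEY,
          agent_key INTEGER REFERENCES dim_agent(agent_key),
          date_key  INTEGER REFERENCES dim_date(date_key),
          sales_usd REAL,
          quantity  INTEGER
      );
      """)

      cur.executemany("INSERT INTO dim_agent VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
      cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?, ?)",
                      [(20100101, "2010-01-01", 2010, 1), (20100102, "2010-01-02", 2010, 1)])
      cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?, ?)",
                      [(None, 1, 20100101, 10000.0, 5), (None, 2, 20100102, 2500.0, 2)])

      # Slice and dice: roll the measure up by agent and year.
      for row in cur.execute("""
          SELECT a.agent_name, d.year, SUM(f.sales_usd)
          FROM fact_sales f
          JOIN dim_agent a ON a.agent_key = f.agent_key
          JOIN dim_date  d ON d.date_key  = f.date_key
          GROUP BY a.agent_name, d.year"""):
          print(row)   # ('Alice', 2010, 10000.0) and ('Bob', 2010, 2500.0)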

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
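
    To tie the keywords together, here is a minimal sketch of requesting the new behaviour explicitly from client code; the connection settings, table, and column names are placeholders, and it assumes the mysql-connector-python driver and a MySQL 5.6+ server (any client that can send SQL works the same way). Specifying ALGORITHM and LOCK makes the statement fail fast if the requested mode is not supported, rather than silently falling back to a table copy:

      import mysql.connector  # assumed driver; substitute your own client library

      conn = mysql.connector.connect(host="localhost", user="app",
                                     password="secret", database="test")
      cur = conn.cursor()

      # Build a secondary index online: no table copy, concurrent writes allowed.
      cur.execute("ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE")

      # Changing a column type is one of the noted exceptions; it still needs the
      # copying algorithm, which in turn requires at least a shared lock.
      cur.execute("ALTER TABLE t MODIFY a BIGINT, ALGORITHM=COPY, LOCK=SHARED")

      cur.close()
      conn.close()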

    Read the article

  • Dynamically resizing row height in QTreeWidget/QTreeView while editing items

    - by serge
    Hi everyone, I have some problems with sizing row height in a QTreeWidget. I use a QStyledItemDelegate with a QPlainTextEdit. While editing text in the QPlainTextEdit I check for changes with the help of rect = self.blockBoundingRect(self.firstVisibleBlock()), and if the text height changes I need the row in the QTreeWidget to resize as well. But I don't know how to inform the TreeWidget or the Delegate about the changes. I even tried to initialize the editor with the index so that I could use it later, but it suddenly changes:

      class MyEditor(QPlainTextEdit):
          def __init__(self, index=None, parent=None):
              super(MyEditor, self).__init__(parent)
              self.index = index
              self.connect(self, SIGNAL("textChanged()"), self.setHeight)

          def setHeight(self):
              if self.index:
                  rect = self.blockBoundingRect(self.firstVisibleBlock())
                  self.resize(rect.width(), rect.height())
                  self.emit(SIGNAL("set_height"), QSize(rect.width(), rect.height()), self.index)

    How can I bind the editor's size changes to the TreeWidget? And one more thing: by default all items (cells) in the TreeWidget have -1 or some big value as the default width. I need the whole text in the cell to be visible, so how can I limit it to the visible range and expand it in height? Thank you in advance, Serge
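
    A minimal sketch of one way to wire this up, assuming PyQt5 (the class names are invented, and this is not the original poster's code): the editor reports its preferred height whenever the text reflows, the delegate caches that height, and emitting the delegate's built-in sizeHintChanged signal makes the view re-query sizeHint() so the row grows or shrinks while you type.

      import sys
      from PyQt5.QtCore import Qt, QSize, pyqtSignal
      from PyQt5.QtWidgets import (QApplication, QPlainTextEdit, QStyledItemDelegate,
                                   QTreeWidget, QTreeWidgetItem)

      class GrowingEditor(QPlainTextEdit):
          heightChanged = pyqtSignal(int)  # preferred height in pixels

          def __init__(self, parent=None):
              super(GrowingEditor, self).__init__(parent)
              self.textChanged.connect(self._announce_height)

          def _announce_height(self):
              # blockCount() is the number of text lines; lineSpacing() converts lines to pixels.
              self.heightChanged.emit(self.blockCount() * self.fontMetrics().lineSpacing() + 10)

      class WrapDelegate(QStyledItemDelegate):
          def __init__(self, parent=None):
              super(WrapDelegate, self).__init__(parent)
              self._heights = {}  # row -> pixel height reported by the active editor

          def createEditor(self, parent, option, index):
              editor = GrowingEditor(parent)
              # Keep the index in the closure; when the editor grows, remember the
              # height and tell the view that this row's size hint has changed.
              editor.heightChanged.connect(lambda h, ix=index: self._remember(ix, h))
              return editor

          def _remember(self, index, height):
              self._heights[index.row()] = height
              self.sizeHintChanged.emit(index)  # built-in delegate signal; the view re-lays out the row

          def setEditorData(self, editor, index):
              editor.setPlainText(index.data() or "")

          def setModelData(self, editor, model, index):
              model.setData(index, editor.toPlainText(), Qt.EditRole)

          def sizeHint(self, option, index):
              size = super(WrapDelegate, self).sizeHint(option, index)
              height = self._heights.get(index.row())
              return QSize(size.width(), height) if height else size

      if __name__ == "__main__":
          app = QApplication(sys.argv)
          tree = QTreeWidget()
          tree.setColumnCount(1)
          tree.setUniformRowHeights(False)  # rows must be allowed to have individual heights
          tree.setItemDelegate(WrapDelegate(tree))
          for text in ("double-click me and keep typing", "another row"):
              item = QTreeWidgetItem([text])
              item.setFlags(item.flags() | Qt.ItemIsEditable)
              tree.addTopLevelItem(item)
          tree.show()
          sys.exit(app.exec_())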

    Read the article

  • Design Question on when to save

    - by Ben
    Hi, I was just after people's opinions on when the best time to save an object (or collection of objects) is. I appreciate that it can be completely dependent on the situation you are in, but here is my situation. I have a collection of objects "MyCollection" in a grid. You can open each object "MyObject" in an editor dialogue by double-clicking on the grid. Selecting "Cancel" on the dialogue will back out any changes you have made, but should selecting "OK" commit those changes back to the database, or should it commit the changes on that object back to the collection, with a save method that iterates through the collection and saves all changed objects? If I have an object "MyParentObject" that contains a collection of children "MyChildObjectCollection", none of the changes made to each "MyChildObject" would be committed to the database until the "MyParentObject" was saved - this makes sense. However, in my current situation none of the objects in the collection are linked, so should the "OK" on the dialogue commit the changes to the database? Appreciate any opinions on this. Thanks
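
    For what it's worth, the second option usually looks something like the sketch below (purely illustrative names, and "repository" stands in for whatever actually talks to the database): "OK" on the dialogue only updates the in-memory object and marks it dirty, and a single save method later walks the collection and persists whatever changed.

      class MyObject:
          def __init__(self, value):
              self.value = value
              self.is_dirty = False

          def apply_edit(self, new_value):
              # Called when the editor dialogue is closed with OK.
              if new_value != self.value:
                  self.value = new_value
                  self.is_dirty = True  # changed in memory, not yet persisted

      class MyCollection:
          def __init__(self, items, repository):
              self.items = list(items)
              self.repository = repository  # assumption: some object with a save(obj) method

          def save(self):
              # One explicit save for the whole grid instead of a write per dialogue.
              for obj in self.items:
                  if obj.is_dirty:
                      self.repository.save(obj)
                      obj.is_dirty = False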

    Read the article

  • How do I copy only the values and not the references from a Python list?

    - by wrongusername
    Specifically, I want to create a backup of a list, then make some changes to that list, append all the changes to a third list, but then reset the first list with the backup before making further changes, etc, until I'm finished making changes and want to copy back all the content in the third list to the first one. Unfortunately, it seems that whenever I make changes to the first list in another function, the backup gets changed also. Using original = backup didn't work too well; nor did using def setEqual(restore, backup): restore = [] for number in backup: restore.append(number) solve my problem; even though I successfully restored the list from the backup, the backup nevertheless changed whenever I changed the original list. How would I go about solving this problem?
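
    What is going on here is that a plain assignment such as original = backup only copies the reference, not the list, so both names keep pointing at the same object. A short, self-contained illustration of the usual options (variable names invented), including how to restore a list in place from a backup:

      import copy

      original = [1, [2, 3]]

      alias   = original                 # not a copy: both names point at the same list
      shallow = original[:]              # new outer list, but nested objects are still shared
      deep    = copy.deepcopy(original)  # fully independent copy, nested lists included

      original[0] = 99
      original[1].append(4)

      print(alias)    # [99, [2, 3, 4]]  - follows every change
      print(shallow)  # [1, [2, 3, 4]]   - top level preserved, nested list still shared
      print(deep)     # [1, [2, 3]]      - completely untouched

      # To restore the working list from the backup without rebinding the name
      # (so other references to it stay valid), assign into the full slice:
      original[:] = copy.deepcopy(deep)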

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo

    Read the article

  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft is always open that the certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what industry needs and more to the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, its time to embrace curly braces whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera – all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (its a pretty lofty goal), there’s years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print). And while details haven’t been finalized, its a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There’s big changes on the web side of certification, but I actually see these changes as being for the good! 
    Check out the new exam requirements for MCSD – Web Applications (image source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx, Nov 21, 2012).

    We now *start* with HTML5, JavaScript, and CSS3! Now I'm sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev.

    The fact that the second exam clearly states "MVC Web Applications" shows that Web Forms is truly legacy and deprecated. That's not to say there aren't those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they'd better get on the MVC bandwagon if they want to stay current. Fantastic!

    And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It's no secret that there's been a huge push in getting developers onto Azure. I don't see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn't be as prominent, though, if not for the Windows 8 Store App play, where HTML5 is a first-class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developer's loss is the web developer's gain.

    Get Ready for Changes

    In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that the licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we're seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system.

    JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply that by 8 platforms, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past; not that we have had many 'bad commits' to the master source base, since in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer... well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think of it, I may be one from time to time. :^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^) -kto

    Read the article

  • acl.allow not working in mercurial

    - by sagar
    When I try to apply some authentication in the .hg/hgrc file on an Ubuntu machine, it's not working. I have added the code below to the hgrc file on Ubuntu:

        [web]
        allow_push=*
        allow_read=*
        push_ssl=false

        [hooks]
        pretxnchangegroup.acl=python:hgext.acl.hook

        [acl.allow]
        /home/test/testrepository/*=myid

    When I push some changes from my Windows repository to testrepository on Ubuntu, it gives the message below:

        pushing to http://ubantuip:8000
        searching for changes
        remote: adding changesets
        remote: adding manifests
        remote: adding file changes
        remote: added 1 changesets with 1 changes to 1 files
        remote: error: pretxnchangegroup.acl hook failed: acl: access denied for changeset 69f00e372c67
        remote: transaction abort!
        remote: rollback completed
        remote: abort: acl: access denied for changeset 69f00e372c67

    Why am I not able to push the changes?

    Read the article

  • hg unshelve not working

    - by shanebonham
    Our team is just getting started with Mercurial. One of the first things we've started to play with is hg shelve. Locally, I have no problem shelving changes; it all works perfectly from what I can tell. However, when I try to unshelve, I get the "restoring backup files" message, but when I run hg diff, there are no changes, and my changes are missing from the code. If I do hg unshelve -i I can see the diff, but again, trying to unshelve seems to have no effect. I've been trying to test it with some very simple changes that shouldn't be a problem in terms of conflicts, e.g. adding a test comment. I should note that I've tried hg unshelve -f, after which it says "unshelve completed", but again, my changes are not restored. Any ideas what I am doing wrong? If it matters: Mercurial Distributed SCM (version 1.5.1+20100405)

    Read the article

  • Is it possible to place Joomla under revision control?

    - by Tom
    We are a team of many developers working on a website that uses both Joomla and custom PHP scripts. The problem is that there are multiple developers working on various features which need to update information in Joomla (adding modules, changing existing ones, or changing settings), and when one developer changes something, he usually makes the change locally first and then does the same thing (hopefully) on the production server. Not only is this very error-prone, but the developers often forget to tell other developers about the changes. The custom PHP scripts are easily shared between developers, but the changes in Joomla are often forgotten, and they lead to serious conflicts when a developer tries to replicate his local changes in production. I have thought about placing Joomla in a Mercurial repository, but how could we distribute the changes in the database between the development, the testing, and the production machines?

    Read the article

  • xcode storyboard reverting in compile

    - by darren
    I started having some very odd behaviour in Xcode 4.5 recently. I made a change to a UITableViewController with static cells but the changes did not appear in the simulator and neither did my code changes. I removed the app from the simulator and ran clean on the project, then started again and all the changes appeared. I made another code change, ran the debugger via simulator and once again I saw my old UITableViewController values and my code changes were absent. This project is using storyboards, but I am not sure if this problem is related to just storyboards given my code changes are reverted as well. I am deeply confused here. Not even clean fixed this issue. Any thoughts or suggestions?

    Read the article

  • jQuery display problem?

    - by SLAPme
    How do I hide the #changes-saved element using jQuery? For example, the message is displayed when the user clicks the submit button; if the user then leaves the page and later returns, #changes-saved should no longer be displayed until the submit button is clicked again. Here is the jQuery code:

        $(function() {
            $('#changes-saved').hide();
            $(".save-button").click(function() {
                $.post($("#contact-form").attr("action"), $("#contact-form").serialize(), function(html) {
                    $("div.contact-info-form").html(html);
                    $('#changes-saved').append('<li>Changes saved!</li>').show().pause(1000).hide();
                });
                return false; // Prevent normal submit.
            });
        });

    Read the article

  • Bitmap manipulation as a referenced object

    - by John Maloney
    I am working with a Bitmap object, adding annotations or copyright material to the image and then returning the modified bitmap. My problem is that the Bitmap won't hold the changes made to it as it moves along the chain. I am treating it as a reference type and would assume that as it gets passed to each class it would carry the changes made to it. I have verified that the code inside the MarkImage() methods does run, but whichever one gets called last is the only one whose changes show up.

        Public Shared Function GetImage(ByVal imageDef As IImageDefinition) As Bitmap
            Dim image As Bitmap = Drawing.Image.FromFile(imageDef.Path)
            If Not imageDef.Authorization.IsAllowed(AuthKeys.CanViewWithoutCopyright) Then
                image = New MarkCopyrightImage(image).MarkImage()
            End If
            If Not imageDef.Authorization.IsAllowed(AuthKeys.CanViewWithoutAnnotations) Then
                image = New MarkAnnotateImage(image).MarkImage()
            End If
            Return image
        End Function

    How do I write changes to a bitmap object and then pass that object to another class while maintaining the changes? Answers in C# or VB are fine. Thanks
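    Not a diagnosis of the issue above, but as a point of comparison, here is a minimal C# sketch of one common way a marking step in such a chain is written: the (hypothetical) MarkCopyrightImage class draws directly onto the bitmap it was handed via Graphics.FromImage and returns that same instance, so every later step in the chain sees the earlier marks. One gotcha with this pattern is that Graphics.FromImage throws for bitmaps with an indexed pixel format, which is why some implementations copy into a new 32-bit Bitmap first.

        using System.Drawing;

        public class MarkCopyrightImage
        {
            private readonly Bitmap _image;

            public MarkCopyrightImage(Bitmap image)
            {
                _image = image;
            }

            // Draws the copyright text onto the bitmap passed to the constructor
            // and returns that same instance, so callers keep seeing the changes
            // even without reassigning the reference.
            public Bitmap MarkImage()
            {
                using (Graphics g = Graphics.FromImage(_image))
                {
                    g.DrawString("(c) Example Copyright", SystemFonts.DefaultFont,
                                 Brushes.White, new PointF(10f, 10f));
                }
                return _image;
            }
        }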

    Read the article

  • JQuery code problem?

    - by SLAPme
    New to jQuery: I added the jQuery snippet below and moved it around in my code, and now it won't work. I forgot what I did; can someone fix my code by placing the snippet below in its correct place? Thanks.

        $('a').click(function () {
            $('#changes-saved').remove();
        });
        return false; // prevent normal submit
        });

    jQuery code:

        $(function() {
            $('#changes-saved').hide();
            $('.save-button').click(function() {
                $.post($('#contact-form').attr('action'), $('#contact-form').serialize(), function(html) {
                    $('div.contact-info-form').html(html);
                    $('#changes-saved').append('Changes saved!').show().pause(1000).hide();
                });
                return false; // prevent normal submit
            });
            $('a').click(function () {
                $('#changes-saved').remove();
            });
            return false; // prevent normal submit
        });
        });

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you're going to want to place your database into source control. Using Red Gate SQL Source Control, this process is extremely simple.

    SQL Source Control works within your SQL Server Management Studio (SSMS) interface. This means you can work with your databases in any way that you're used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that's fine. After installing SQL Source Control, you'll see it as a direct piece of the SSMS environment when you open SSMS. The key point initially is that I currently don't have a database selected. You can even see that in the SQL Source Control window where it shows, in red, "No database selected – select a database in Object Explorer." If I expand my Databases list in the Object Explorer, you'll be able to immediately see which databases have been integrated with source control and which have not; there are visible differences between the databases.

    To add a database to source control, I first have to select it. For this example, I'm going to add the AdventureWorks2012 database to an instance of the SVN source control software (I'm using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes. I'm going to need to click on the "Link database to source control" text, which will open up a window for connecting this database to the source control system of my choice. You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I'll be storing my database code. I set these up in advance. You'll need two: one for the main set of scripts and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I'll talk more about them another day.

    Finally, I have to determine if this is an isolated environment that I'm going to be the only one to use, a dedicated database, or if I'm sharing the database in a shared environment with other developers, a shared database. The main difference is that, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate them into my database, while under a shared database, all changes for all developers are made at the same time, which means you could commit other people's work without proper testing. It all depends on the type of environment you work within.

    Once it's all set, SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You'll get a report showing exactly the list of differences, and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You'll be able to modify them later in the same way. In the list of differences for my new database, you can select/deselect all the objects or each object individually. You also get a report showing the differences between what's in the database and what's in source control.
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article
