Search Results

Search found 14602 results on 585 pages for 'objected oriented design'.


  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I've included the complete code for the IndexedDbDataSource at the very bottom of this blog entry.

    What is IndexedDB?

    IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript, then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And IndexedDB provides you with transactions, indices, and cursors, the elements of any modern database.

    An IndexedDB database might be different from the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store JavaScript objects in object stores. You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here's how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist:

        var reqOpen = window.indexedDB.open("TasksDB", 2);

        reqOpen.onupgradeneeded = function (evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        };

        reqOpen.onsuccess = function () {
            var db = reqOpen.result;
            // Do something with db
        };

    When you call window.indexedDB.open() and the database does not already exist, the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with it.

    You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store:

        var transaction = db.transaction("tasks", "readwrite");
        var reqAdd = transaction.objectStore("tasks").add({ name: "Feed the dog" });

        reqAdd.onsuccess = function () {
            // Task added successfully
        };

    The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully, the success event is raised.

    Creating a WinJS IndexedDbDataSource

    The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object.
    The List object enables you to represent a JavaScript array as a data source, and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView, you have a choice: you can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source.

    Implementing the IListDataSource Interface

    You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    The code above creates a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called, and a data adapter is passed to it. The code above also creates a new method exposed by the IndexedDbDataSource named nuke(); the nuke() method deletes all of the objects from an object store. Finally, the code above overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store.

    Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface, which contains the following methods:

    · change()
    · getCount()
    · insertAfter()
    · insertAtEnd()
    · insertAtStart()
    · insertBefore()
    · itemsFromDescription()
    · itemsFromEnd()
    · itemsFromIndex()
    · itemsFromKey()
    · itemsFromStart()
    · itemSignature()
    · moveAfter()
    · moveBefore()
    · moveToEnd()
    · moveToStart()
    · remove()
    · setNotificationHandler()
    · compareByIdentity

    Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source, then you really only need to implement the getCount() and itemsFromIndex() methods.

    Implementing the getCount() Method

    The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store, then this method would return the value 10,000.
    Here's how I implemented the getCount() method:

        getCount: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore().then(function (store) {
                    var reqCount = store.count();
                    reqCount.onerror = that._error;
                    reqCount.onsuccess = function (evt) {
                        complete(evt.target.result);
                    };
                });
            });
        }

    The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous, which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete().

    Implementing the itemsFromIndex() Method

    When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints, and you can return as many items before and after the request index as you please. Here's how I implemented the itemsFromIndex() method:

        itemsFromIndex: function (requestIndex, countBefore, countAfter) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that.getCount().then(function (count) {
                    if (requestIndex >= count) {
                        return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                    }
                    var startIndex = Math.max(0, requestIndex - countBefore);
                    var endIndex = Math.min(count, requestIndex + countAfter + 1);
                    that._getObjectStore().then(function (store) {
                        var index = 0;
                        var items = [];
                        var req = store.openCursor();
                        req.onerror = that._error;
                        req.onsuccess = function (evt) {
                            var cursor = evt.target.result;
                            if (index < startIndex) {
                                index = startIndex;
                                cursor.advance(startIndex);
                                return;
                            }
                            if (cursor && index < endIndex) {
                                index++;
                                items.push({
                                    key: cursor.value[store.keyPath].toString(),
                                    data: cursor.value
                                });
                                cursor.continue();
                                return;
                            }
                            var results = {
                                items: items,
                                offset: requestIndex - startIndex,
                                totalCount: count
                            };
                            complete(results);
                        };
                    });
                });
            });
        }

    In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object, and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null, then you know that you have reached the end of the cursor and you can return the results.

    There are some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface: each item should have a key and a data property.
    Implementing the insertAtEnd() Method

    When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source, then you need to implement the insertAtEnd() method. Here's how I implemented the insertAtEnd() method for the IndexedDbDataSource:

        insertAtEnd: function (unused, data) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqAdd = store.add(data);
                    reqAdd.onerror = that._error;
                    reqAdd.onsuccess = function (evt) {
                        var reqGet = store.get(evt.target.result);
                        reqGet.onerror = that._error;
                        reqGet.onsuccess = function (evt) {
                            var newItem = {
                                key: evt.target.result[store.keyPath].toString(),
                                data: evt.target.result
                            };
                            complete(newItem);
                        };
                    };
                });
            });
        }

    When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself.

    Implementing the remove() Method

    Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal(). Here's what the setNotificationHandler() and removeInternal() methods look like:

        setNotificationHandler: function (notificationHandler) {
            this._notificationHandler = notificationHandler;
        },

        removeInternal: function (key) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqDelete = store.delete(key);
                    reqDelete.onerror = that._error;
                    reqDelete.onsuccess = function (evt) {
                        that._notificationHandler.removed(key.toString());
                        complete();
                    };
                });
            });
        }

    The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully, then the _notificationHandler.removed() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.removed() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method.
    You can raise the following types of notifications using the _notificationHandler:

    · beginNotifications()
    · changed()
    · endNotifications()
    · inserted()
    · invalidateAll()
    · moved()
    · removed()
    · reload()

    These methods are all part of the IListDataNotificationHandler interface in the WinJS library.

    Implementing the nuke() Method

    I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method:

        nuke: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqClear = store.clear();
                    reqClear.onerror = that._error;
                    reqClear.onsuccess = function (evt) {
                        that._notificationHandler.reload();
                        complete();
                    };
                });
            });
        }

    Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update.

    Using the IndexedDbDataSource

    To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting it (swipe or right-click) and clicking the Delete button. Here's the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>DataSources</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" />
            <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script>
            <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script>

            <!-- DataSources references -->
            <link href="indexedDb.css" rel="stylesheet" />
            <script type="text/javascript" src="indexedDbDataSource.js"></script>
            <script src="indexedDb.js"></script>
        </head>
        <body>
            <div id="tmplTask" data-win-control="WinJS.Binding.Template">
                <div class="taskItem">
                    Id: <span data-win-bind="innerText:id"></span>
                    <br /><br />
                    Name: <span data-win-bind="innerText:name"></span>
                </div>
            </div>

            <div id="lvTasks" data-win-control="WinJS.UI.ListView"
                data-win-options="{
                    itemTemplate: select('#tmplTask'),
                    selectionMode: 'single'
                }"></div>

            <form id="frmAdd">
                <fieldset>
                    <legend>Add Task</legend>
                    <label>New Task</label>
                    <input id="inputTaskName" required />
                    <button>Add</button>
                </fieldset>
            </form>

            <button id="btnNuke">Nuke</button>
            <button id="btnDelete">Delete</button>
        </body>
        </html>

    And here is the JavaScript code for the TaskList app:

        /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" />
        /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" />

        function init() {
            WinJS.UI.processAll().done(function () {
                var lvTasks = document.getElementById("lvTasks").winControl;

                // Bind the ListView to its data source
                var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
                lvTasks.itemDataSource = tasksDataSource;

                // Wire-up Add, Delete, Nuke buttons
                document.getElementById("frmAdd").addEventListener("submit", function (evt) {
                    evt.preventDefault();
                    tasksDataSource.beginEdits();
                    tasksDataSource.insertAtEnd(null, {
                        name: document.getElementById("inputTaskName").value
                    }).done(function (newItem) {
                        tasksDataSource.endEdits();
                        document.getElementById("frmAdd").reset();
                        lvTasks.ensureVisible(newItem.index);
                    });
                });

                document.getElementById("btnDelete").addEventListener("click", function () {
                    if (lvTasks.selection.count() == 1) {
                        lvTasks.selection.getItems().done(function (items) {
                            tasksDataSource.remove(items[0].data.id);
                        });
                    }
                });

                document.getElementById("btnNuke").addEventListener("click", function () {
                    tasksDataSource.nuke();
                });

                // This method is called to initialize the IndexedDb database
                function upgrade(evt) {
                    var newDB = evt.target.result;
                    newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
                }
            });
        }

        document.addEventListener("DOMContentLoaded", init);

    The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code:

        var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
        lvTasks.itemDataSource = tasksDataSource;

    The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id:

        function upgrade(evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        }

    The Complete Code for the IndexedDbDataSource

    Here's the complete code for the IndexedDbDataSource:

        (function () {

            /************************************************
             * The IndexedDbDataAdapter enables you to work
             * with an HTML5 IndexedDB database.
             *************************************************/
            var IndexedDbDataAdapter = WinJS.Class.define(
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._dbName = dbName;                    // database name
                    this._dbVersion = dbVersion;              // database version
                    this._objectStoreName = objectStoreName;  // object store name
                    this._upgrade = upgrade;                  // database upgrade script
                    this._error = error || function (evt) { console.log(evt.message); };
                },
                {
                    /*******************************************
                     * IListDataAdapter Interface Methods
                     ********************************************/

                    getCount: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().then(function (store) {
                                var reqCount = store.count();
                                reqCount.onerror = that._error;
                                reqCount.onsuccess = function (evt) {
                                    complete(evt.target.result);
                                };
                            });
                        });
                    },

                    itemsFromIndex: function (requestIndex, countBefore, countAfter) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that.getCount().then(function (count) {
                                if (requestIndex >= count) {
                                    return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                                }
                                var startIndex = Math.max(0, requestIndex - countBefore);
                                var endIndex = Math.min(count, requestIndex + countAfter + 1);
                                that._getObjectStore().then(function (store) {
                                    var index = 0;
                                    var items = [];
                                    var req = store.openCursor();
                                    req.onerror = that._error;
                                    req.onsuccess = function (evt) {
                                        var cursor = evt.target.result;
                                        if (index < startIndex) {
                                            index = startIndex;
                                            cursor.advance(startIndex);
                                            return;
                                        }
                                        if (cursor && index < endIndex) {
                                            index++;
                                            items.push({
                                                key: cursor.value[store.keyPath].toString(),
                                                data: cursor.value
                                            });
                                            cursor.continue();
                                            return;
                                        }
                                        var results = {
                                            items: items,
                                            offset: requestIndex - startIndex,
                                            totalCount: count
                                        };
                                        complete(results);
                                    };
                                });
                            });
                        });
                    },

                    insertAtEnd: function (unused, data) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqAdd = store.add(data);
                                reqAdd.onerror = that._error;
                                reqAdd.onsuccess = function (evt) {
                                    var reqGet = store.get(evt.target.result);
                                    reqGet.onerror = that._error;
                                    reqGet.onsuccess = function (evt) {
                                        var newItem = {
                                            key: evt.target.result[store.keyPath].toString(),
                                            data: evt.target.result
                                        };
                                        complete(newItem);
                                    };
                                };
                            });
                        });
                    },

                    setNotificationHandler: function (notificationHandler) {
                        this._notificationHandler = notificationHandler;
                    },

                    /*****************************************
                     * IndexedDbDataSource Methods
                     ******************************************/

                    removeInternal: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqDelete = store.delete(key);
                                reqDelete.onerror = that._error;
                                reqDelete.onsuccess = function (evt) {
                                    that._notificationHandler.removed(key.toString());
                                    complete();
                                };
                            });
                        });
                    },

                    nuke: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqClear = store.clear();
                                reqClear.onerror = that._error;
                                reqClear.onsuccess = function (evt) {
                                    that._notificationHandler.reload();
                                    complete();
                                };
                            });
                        });
                    },

                    /*******************************************
                     * Private Methods
                     ********************************************/

                    _ensureDbOpen: function () {
                        var that = this;
                        // Try to get cached Db
                        if (that._cachedDb) {
                            return WinJS.Promise.wrap(that._cachedDb);
                        }
                        // Otherwise, open the database
                        return new WinJS.Promise(function (complete, error, progress) {
                            var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion);
                            reqOpen.onerror = function (evt) {
                                error();
                            };
                            reqOpen.onupgradeneeded = function (evt) {
                                that._upgrade(evt);
                                that._notificationHandler.invalidateAll();
                            };
                            reqOpen.onsuccess = function () {
                                that._cachedDb = reqOpen.result;
                                complete(that._cachedDb);
                            };
                        });
                    },

                    _getObjectStore: function (type) {
                        type = type || "readonly";
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._ensureDbOpen().then(function (db) {
                                var transaction = db.transaction(that._objectStoreName, type);
                                complete(transaction.objectStore(that._objectStoreName));
                            });
                        });
                    },

                    _get: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().done(function (store) {
                                var reqGet = store.get(key);
                                reqGet.onerror = that._error;
                                reqGet.onsuccess = function (item) {
                                    complete(item);
                                };
                            });
                        });
                    }
                }
            );

            var IndexedDbDataSource = WinJS.Class.derive(
                WinJS.UI.VirtualizedDataSource,
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                    this._baseDataSourceConstructor(this._adapter);
                },
                {
                    nuke: function () {
                        this._adapter.nuke();
                    },
                    remove: function (key) {
                        this._adapter.removeInternal(key);
                    }
                }
            );

            WinJS.Namespace.define("DataSources", {
                IndexedDbDataSource: IndexedDbDataSource
            });

        })();

    Summary

    In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications, such as a removed or invalidateAll notification, by taking advantage of the methods of the IListDataNotificationHandler interface.


  • Is there really a need for encryption to have true wireless security? [closed]

    - by Cawas
    I welcome better key-wording here, both on tags and title. I'm trying to conceive a free, open and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. Since wireless Access Points are the most, if not the only, true weak point of a Local Area Network (let's not consider every other security aspect of having internet), I think there are basically two points to consider here:

    1. Having an open AP for anyone to use the internet through.
    2. Leaving the whole LAN also open for guests to be able to easily read (only) files on it, and even a place to drop files on.

    Considering these two aspects, once everything is done properly: what's the most secure option between having that, or having just an encrypted, password-protected wifi? Of course "both" would seem "more secure", but it shouldn't actually add anything substantial. That's the question, but I think it may need more elaborating on. If you don't think so, please feel free to skip the next (long) part.

    Elaborating more on the two aspects

    I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking: with cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what building walls is for cables, and giving pass-phrases would be adding a door with a key. But cabling without encryption is also insecure: if someone plugs in, all the data is right there. So, while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. It's wasting resources for too little gain. I believe we should encrypt only sensitive data, regardless of wires. That's already done with HTTPS, so I don't really need to encrypt my torrents, for instance. They're torrents; they are meant to be freely shared!

    As for passwords, they should be added to the users, always, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the situations where people just don't care. I think there are enough situations where we actually use passwords on our OS users, so let's go with that in mind.

    I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that, without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. But it's lacking the second aspect. I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think?


  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we have does anything that talks to the Internet, nor do we have any Windows admins on staff.)

    Key points:

    - We have a main office and about 12 remote sites that essentially double-NAT their subnets with physically-segregated switches. (No VLANing, and limited ability to do so with current switches.)
    - These locations have a "DMZ" subnet that is NAT'd on an identically assigned 10.0.0/24 subnet at each site. These subnets cannot talk to the DMZs at any other location because we don't route them anywhere except between server and adjacent "firewall".
    - Some of these locations have multiple ISP connections (T1, Cable, and/or DSL) that we manually route using IP Tools in Linux.
    - These firewalls all run on the (10.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems.
    - Connecting these firewalls (via simple unmanaged switches) is one or more servers that must be publicly accessible.
    - Connected to the main office's 10.0.0/24 subnet are servers for email, tele-commuter VPN, the remote office VPN server, and the primary router to the internal 192.168/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source.
    - All our routing is done manually or with OpenVPN route statements.
    - Inter-office traffic goes through the OpenVPN service in the main 'Router' server, which has its own NATing involved.
    - Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers serving 5-20 terminals each.
    - The 192.168.2/24 and 192.168.3/24 subnets are mostly, but NOT entirely, on Cisco 2960 switches that can do VLANs. The remainder are D-Link DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs, since only the senior networking staff person understands how they work.
    - All regular internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168/24 subnets to the 10.0.0.0/24 subnets according to the manually-configured routing rules that we use to point outbound traffic to the proper internet connection, based on '-host' routing statements.

    I want to simplify this and ready All Of The Things for ESXi virtualization, including the public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down?

    Basic diagram for the main office: (diagram image not included here)

    These are my goals:

    - Public-facing servers with interfaces on that middle 10.0.0/24 network to be moved into the 192.168.2/24 subnet on ESXi servers.
    - Get rid of the double NAT and get our entire network on one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.
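    As a side note on the manual '-host' routing: iproute2 policy routing can often replace a pile of per-destination routes by selecting the outbound ISP per source subnet instead. A minimal sketch, where the table name, gateway 10.0.0.1, interface eth1, and source subnet are all made-up values for illustration:

        # Create a routing table for one ISP and send a whole source subnet out through it
        echo "100 isp_t1" >> /etc/iproute2/rt_tables
        ip route add default via 10.0.0.1 dev eth1 table isp_t1
        ip rule add from 192.168.2.0/24 lookup isp_t1
        ip route flush cache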


  • How can I tell Firefox to ignore unprintable characters?

    - by BrianH
    Edit: Summary

    Apparently the intended character to display in this case is an "en dash". This page has a table halfway down that shows that for the &ndash;, some software will convert the correct hex code of 2013 to 0096 (look at the first row in the table). This answer on Stack Overflow explains that somehow this is a mixup between Windows-1252 and UTF-8. This blog article reinforces this: "Character 150 (0x96) is the unicode character 'START OF GUARDED AREA' in the non-displayed C1 control character range, but in the Windows-1252 encoding it's mapped to the displayable character 0x2013 'en-dash' (a short dash)." Others have struggled with this when producing content, as this answer on Stack Overflow shows how to replace 0x0096 with 0x2013. Google must realize this, because as stated in my original question below, Google's cached version of the Amazon page has &ndash;, so it seems they are automatically correcting these mistakes on pages they cache. I have tried setting my encoding to Windows-1252, but that does not help. So now I guess my question is: how can I tell Firefox to ignore unprintable characters like these?

    Original content below: (Firefox 3.6.13 on Windows XP)

    Every once in a while I notice an odd character on certain web pages when browsing the web. It is an outline of a box with a 4-digit number inside. An example of a page that has these characters is: http://aws.amazon.com/ec2/#highlights After each section heading (Elastic, Completely Controlled, ...) I see a box with the number "0096" inside. I looked at the cached version on Google, and Google has &ndash; in its place, so I'm guessing I should be seeing a dash there instead of the box with the numbers in it. I have tried changing the character encoding in Firefox but haven't been able to find one that shows these characters correctly. Is there a way to allow Firefox to view these characters? Thanks in advance!

    Edit - adding a screen shot of the "special" characters: (screenshot not included here)

    Edit #2 - tried in Ubuntu - new screenshots

    I logged into my Ubuntu desktop and browsed to the Amazon page in Chrome and Firefox. Chrome completely ignores the character, even if I inspect or view the page source. Firefox in Ubuntu displays the character exactly like Firefox on my Windows XP box. I copied the character and played around with it at the command line - here is a screenshot of the results: (screenshot not included here)

    It looks like I can paste the character into this post as well: `` It is definitely not isolated to Windows XP. I tried setting the character encoding for my terminal to Windows-1252 (from Dennis' comment below), but then it just displays this character as a question mark. I pulled the webpage down with wget and with curl, and both outputs show this character as: <96> It makes me wonder if this character renders correctly for anyone? It appears WebKit just ignores it, my IE6 ignores it, and Firefox displays the box with the numbers in it. I would have to imagine the design team at Amazon can see it correctly? It's not a huge deal to get these characters displaying correctly, but it would be nice to know if there is a solution to this.
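    If you want to confirm the Windows-1252 mapping yourself, here is a quick sketch using the TextDecoder API (available in modern browsers and Node; this is just an illustration of the byte-to-character mapping, not a fix for the page):

        // Decode the raw byte 0x96 as Windows-1252: it comes out as U+2013 (en dash),
        // which is what the page author presumably intended.
        var bytes = new Uint8Array([0x96]);
        var ch = new TextDecoder("windows-1252").decode(bytes);
        console.log(ch);                             // "–"
        console.log(ch.charCodeAt(0).toString(16));  // "2013"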


  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder. That folder could be seen as a kind of inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match their file type, but then I still have several gigabytes of data per folder, which doesn't make it efficient enough that I can productively use the folder. I'd rather do a few clicks than have to search through the files, whether that's with some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize them if there were few in a folder, rather than thousands of them.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. F. i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking a general look at it, but judging by the abstract and conclusion, it doesn't arrive at a conclusion or approach which results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Upon searching further, I don't seem to find any useful or free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which leads to different papers in the results. Perhaps a paper is out there, but I'm just not using the same terms as that paper uses? They often use more scientific terms.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was way more useful than how most of us organize our data these days...

    Don't advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization does help me reach my goal.


  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi,

    Will I get any genuine added performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA: 2-port SATA, PCI-Express (1x), RAID 0/1. It only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive, and an external e-SATA connected 2TB hard drive, by dealing with two of the internal hard drives.

    My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green drive (e-SATA), and one DVD drive can already be supported by the Intel P55 and JMicron controllers on the ASUS motherboard: the Intel P55 controls six HDDs (configured as three x RAID 1), and the JMicron controls two HDDs as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port.

    Bigger picture details: I have an ASUS motherboard designed for the LGA1156 type processor, and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply, a Corsair AX850, is more than sufficient for the system; it will never need the full 850 watts (future: second graphics card).

    The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last-minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or an equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives with a better RAID level, for example RAID 5.

    My issue & assumption: I am trusting that the Intel P55 chipset can properly handle six drives, configured as three x RAID 1. I am assuming that the JMicron can handle, using its red SATA ports, one RAID 1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1); white port 2 is not used. The e-SATA connection is from the JMicron straight to, and through, the motherboard to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? Or is it wisdom to take the pressure a little off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID 0. RAID 5 would be nice. The CPU, an Intel Core i7-870, will provide the clout.

    Context to nine drives: I am using virtualisation with Windows 7 Ultimate, with bootable VMs. The operating system gets a mirror. Loaded apps get a mirror. The current design data is kept in another mirror, and another mirror is back-up one and/or VM territory. Then the external 2TB drive (via e-SATA) is the next layer of data security, and finally I use off-site data security.

    Thanks.


  • Application Does Not Start in Windows 7

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems.

    Does anyone know what might be preventing the program from running? I tried running sfc /scannow from the command line (with administrator privileges), shutting down the Spybot Resident, and disabling the firewall and virus scanner. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program, the Event Viewer logs this same set of errors:

        Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
        Faulting application name: SSDFresh.exe, version: 1.0.0.0, time stamp: 0x4f2a45d8
        Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
        Exception code: 0xc0000005
        Fault offset: 0x000007ff0016dbba
        Faulting process id: 0x994
        Faulting application start time: 0x01cd11fd9fe978df
        Faulting application path: C:\Program Files (x86)\SSD Fresh\SSDFresh.exe
        Faulting module path: unknown
        Report Id: dfeed551-7df0-11e1-a2c7-002522c47ec0

        Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None
        Application: SSDFresh.exe
        Framework Version: v4.0.30319
        Description: The process was terminated due to an unhandled exception.
        Exception Info: System.NullReferenceException
        Stack:
           at AbBugReporter.BugForm.InitLanguage()
           at AbBugReporter.BugForm..ctor(AbFlexTrans.LanguageInfo, AbBugReporter.BugReportManager, Boolean)
           at AbBugReporter.BugReportManager.Show(System.Exception)
           at SSDFresh.App.App_DispatcherUnhandledException(System.Object, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs)
           at System.Windows.Threading.Dispatcher.CatchException(System.Exception)
           at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate)
           at System.Windows.Threading.Dispatcher.WrappedInvoke(System.Delegate, System.Object, Int32, System.Delegate)
           at System.Windows.Threading.Dispatcher.InvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32)
           at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr)
           at MS.Win32.UnsafeNativeMethods.DispatchMessage(System.Windows.Interop.MSG ByRef)
           at System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)
           at System.Windows.Application.RunInternal(System.Windows.Window)
           at System.Windows.Application.Run()
           at SSDFresh.App.Main()

        Error 4/2/2012 11:35:39 PM SideBySide 59 None
        Activation context generation failed for "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe". Error in manifest or policy file "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config" on line 0. Invalid Xml syntax.

        (The same "Error 4/2/2012 11:35:39 PM SideBySide 59 None" entry then repeats roughly two dozen more times.)

    For those who are interested, here is my system configuration:

    - ASRock M3A770DE AM3 AMD 770 ATX AMD motherboard
    - AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W triple-core desktop processor ADX455WFGMBOX
    - G.SKILL Value Series 8GB (2 x 4GB) 240-pin DDR3 SDRAM DDR3 1333 (PC3 10600) desktop memory, model F3-10600CL9D-8GBNT
    - Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III synchronous MLC internal solid state drive (primary/boot HD)
    - Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" internal hard drive, bare drive (secondary HD)
    - Sony Optiarc CD/DVD burner, black, SATA model AD-7261S-0B, LightScribe support
    - RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD certified modular active PFC power supply
    - ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP ready CrossFireX support video card
    - Asus ML228H 21.5" Full HD LED-backlit monitor, slim design (x3)
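    (An aside on the SideBySide noise: that event complains that csc.exe.Config is not well-formed XML. A .NET exe.config is just an XML file, so one sanity check is whether the file parses at all. For reference, a minimal well-formed config looks roughly like the sketch below; this is illustrative only, not a known-good replacement for the file on your system, so don't blindly overwrite it.)

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <startup>
            <supportedRuntime version="v4.0.30319" />
          </startup>
        </configuration>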


  • How to stop Apache from crashing my entire server?

    - by CyberShadow
    I maintain a Gentoo server with a few services, including Apache. It's fairly low-end (2GB of RAM and a low-end CPU with 2 cores). My problem is that, despite my best efforts, an overloaded Apache crashes the entire server. In fact, at this point I'm close to being convinced that Linux is a horrible operating system that isn't worth anyone's time looking for stability under load.

    Things I tried:

    - Adjusting oom_adj for the root Apache process (and thus all its children). That had close to no effect. When Apache was overloaded, it would bring the system to a grind, as the system paged out everything else before it got around to killing anything.
    - Turning off swap. Didn't help; the kernel would drop memory backing the binaries of processes and other files on /, thus causing the same effect.
    - Putting it in a memory-limited cgroup (limited to 512 MB of RAM, 1/4 of the total). This "worked", at least in my own stress tests, except the server keeps crashing under load (basically stalling all other processes, inaccessible via SSH, etc.).
    - Running it with idle I/O priority. This wasn't a very good idea in the end, because it just caused the system load to climb indefinitely (into the thousands) with almost no visible effect, until you tried to access an unbuffered part of the disk, which caused the task to freeze. (So much for good I/O scheduling, eh?)
    - Limiting the number of concurrent connections to Apache. Setting the number too low caused web sites to become unresponsive, due to most slots being occupied by long requests (file downloads).
    - Trying various Apache MPMs (prefork, event, itk), without much success.
    - Switching from prefork/event+php-cgi+suphp to itk+mod_php. This improved performance, but didn't solve the actual problem.
    - Switching I/O schedulers (cfq to deadline).

    Just to stress this: I don't care if Apache itself goes down under load; I just want the rest of my system to remain stable. Of course, having Apache recover quickly after a brief period of intensive load would be great to have, but one step at a time. Right now I am mostly dumbfounded by how humanity can, in this day and age, design an operating system where such a seemingly simple task (don't allow one system component to crash the entire system) seems practically impossible - or at least, very hard to do. Please don't suggest things like VMs or "BUY MORE RAM".

    Some more information gathered with a friend's help: the processes hang when the cgroup OOM killer is invoked. Here's the call trace:

        [<ffffffff8104b94b>] ? prepare_to_wait+0x70/0x7b
        [<ffffffff810a9c73>] mem_cgroup_handle_oom+0xdf/0x180
        [<ffffffff810a9559>] ? memcg_oom_wake_function+0x0/0x6d
        [<ffffffff810aa041>] __mem_cgroup_try_charge+0x32d/0x478
        [<ffffffff810aac67>] mem_cgroup_charge_common+0x48/0x73
        [<ffffffff81081c98>] ? __lru_cache_add+0x60/0x62
        [<ffffffff810aadc3>] mem_cgroup_newpage_charge+0x3b/0x4a
        [<ffffffff8108ec38>] handle_mm_fault+0x305/0x8cf
        [<ffffffff813c6276>] ? schedule+0x6ae/0x6fb
        [<ffffffff8101f568>] do_page_fault+0x214/0x22b
        [<ffffffff813c7e1f>] page_fault+0x1f/0x30

    At this point, the apache memory cgroup is practically deadlocked and burning CPU in syscalls (all with the above call trace). This seems like a problem in the cgroup implementation...
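    For anyone wanting to reproduce the cgroup attempt, here is roughly what the 512 MB setup looks like with the v1 memory controller (the mount point, group name, and process match are assumptions; the memory.memsw.* knob additionally requires booting with swap accounting enabled):

        # Mount the memory controller if it isn't already mounted
        mount -t cgroup -o memory memcg /sys/fs/cgroup/memory 2>/dev/null

        # Create a group for Apache and cap it at 512 MB of RAM
        mkdir -p /sys/fs/cgroup/memory/apache
        echo 512M > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

        # Optionally cap RAM+swap too, so the group can't push the rest of the
        # system into swap (needs swapaccount=1 on the kernel command line)
        echo 512M > /sys/fs/cgroup/memory/apache/memory.memsw.limit_in_bytes

        # Move the running Apache processes into the group
        for pid in $(pgrep apache2); do
            echo "$pid" > /sys/fs/cgroup/memory/apache/tasks
        done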


  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD + 1x16GB SSD layout on a mini-ITX Atom board. The NAS is in a Lian Li PC-Q07 case. On this system I was running FreeBSD 8 with a gmirror RAID 1 setup, which was enough for my needs. So far I was using the NAS for:

    - Fileserver with the AFP protocol (only Mac clients used)
    - SVN server hosting all the source trees of my projects
    - JIRA (performance was okay-ish)
    - Time Machine backup for the Macs

    The power consumption was about 38W, although I did not put the HDDs to sleep when unused (I think this is not possible in a RAID setup).

    I liked the NAS because:

    - the performance was good through gigabit LAN (enough for my needs)
    - power consumption was good
    - it's a pretty small case and fits in one of my cupboards

    I disliked the NAS a bit because:

    - it was a bit noisy; the Q07 case vibrated a good amount because of the HDDs, so I switched the NAS off every evening
    - I do not have a real backup of the data on the NAS, only the internal RAID 1 as safety. I really don't want to lose my source trees under any circumstances, so I would sleep better if I knew I had regular backups somewhere.

    Recently, the board seems to have died; I can't boot anymore. Thus, I was thinking about a redesign of my NAS (I still have to find out which parts are broken; I probably need to replace the mainboard and the SSD. The HDDs seem to be okay).

    First of all, I was wondering what other users have as backup for their NAS. Are you actually using a second NAS and regularly copying over the data to keep it safe? Or is there any better solution to this? I was thinking about getting a cheap NAS like the Synology DS112j with only one disk, and using rsync or something similar to regularly copy data over to the second NAS (wake the second NAS upon start, shut it down after the copy). Although this approach seems somewhat weird, it would have the benefit (?) that I could use a single disk instead of RAID in the main NAS, put the disk to sleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS?

    Then I was thinking about a different RAID setup. Since I have to buy a new mainboard as well as an SSD, I might as well switch over to an i3 board with more RAM, and also switch to ZFS. I am not familiar with ZFS - I've never used it - but I read and hear much about it. Would it be viable to set up a ZFS storage with only 2 disks? Can I easily extend this storage with more disks once I choose to add some? I could maybe get a new case like the Fractal Design Array R2, which has more 3.5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for energy/heat/noise reasons. Should I go for a ZFS storage or stick to my gmirror setup? I would also like to keep FreeBSD as the operating system, and I don't need any web GUI or anything (that is, I don't need/want to use FreeNAS or Openfiler etc.). Does anyone maybe have a sample setup in use, so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe with backups) is much appreciated.
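    For the wake-copy-shutdown idea, a rough sketch of what a nightly cron job could look like (the MAC address, hostnames, and paths are all placeholders; it assumes the backup NAS supports wake-on-LAN and accepts SSH):

        #!/bin/sh
        # Wake the backup NAS, wait for it to boot, push the data, shut it down again
        wake_mac="aa:bb:cc:dd:ee:ff"
        wakeonlan "$wake_mac"
        sleep 180                        # give the box time to boot
        rsync -aH --delete /tank/data/ backup-nas:/volume1/backup/
        ssh backup-nas 'poweroff'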


  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because it makes the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software generally allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices, or overlooking/missing something.


  • Open-source and client-side IRC-Client

    - by user125197
    I'm administrating a homepage with an IRC web client. At the moment, it uses a Java client, modified to be SSL-compatible and compatible with whatever design I want to use. The usual PJIRC thing, with users' wishes and concerns implemented (I'll give you SSL-PJIRC if you want - no problem. The login script will be improved for IE6 support this weekend, meaning it writes the object tag if an old IE is found, that's it. The rest runs perfectly well).

    But still, the better user experience would be a client that only requires the user to enable JavaScript. So, before I go into rewriting and customizing a complete Java IRC solution, I'd like to ask some other people - after researching for half a year of time. The requirements are:

    a) Free hosting; no cgi-bin is acceptable. A lot of hosters also have CGI supported, but it doesn't allow IRC access. So you seem to be limited to x10-hosting for CGI:IRC.
    b) The solution must be open source (not free as in free beer, but free as in free speech - I want to be able to read every line of the code).
    c) Hosting cannot be limited to Windows hosting. Free operating systems (anything with Linux, BSD, whatever - I think you get what I mean) must be possible.
    d) No server-side technology. I'm limited to everything that runs client-side only. (Well, a Java applet does...)
    e) The solution must support SSL.
    f) The site must be able to move. Meaning, you register with a new free hoster, load your things up via FileZilla, and the thing simply runs. No server-side configuration needed.
    g) Old browsers must be supported.

    From all my research, there are two ways:

    a) a Java applet
    b) using HTML5 WebSockets (it is impossible to use the JavaScript library I found, as it depends on ActiveX - meaning Windows hosting)

    Option b) means "you are going to enter very unstable content". I worked with HTML5 for months, and, well... HTML5 is also not suitable for older browsers (old browser support is an absolutely necessary requirement). PHP, by the way, is a server-side solution... so not one line of PHP.

    My idea now was a Java applet, and a fallback which again is embedded and proprietary, but only requires JavaScript (wsirc or Mibbit are great here, whilst I prefer wsirc... Mibbit means zooming fonts that are plainly too small and, worse, I cannot read the code).

    So my question is: with free hosting, no installation on the server, plain upload - do you see an open source, completely client-side, SSL-compatible way to use or write a client? This wouldn't be my first programming project; I'm not afraid of the code. If there was a way and I had to write it, even from scratch, I would. From all I know, there sadly is none. But maybe some experienced admin has an idea?

    Best wishes, yetanotheruser

    Sorry my English isn't perfect. It's enough for programming at least.
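    An aside on option b): even with WebSockets, a browser cannot open a raw IRC socket, so some gateway has to translate WebSocket frames to IRC, which collides with the no-server-side requirement unless a third party hosts the gateway. Purely to illustrate how small the client side would be (the gateway URL is made up):

        // Minimal IRC-over-WebSocket client sketch; assumes a WebSocket-to-IRC
        // gateway exists at the (hypothetical) URL below.
        var ws = new WebSocket("wss://gateway.example.org/irc");
        ws.onopen = function () {
            ws.send("NICK guest42\r\n");
            ws.send("USER guest 0 * :Web Guest\r\n");
            ws.send("JOIN #test\r\n");
        };
        ws.onmessage = function (evt) {
            // Answer server PINGs so the connection stays alive
            if (/^PING/.test(evt.data)) {
                ws.send("PONG" + evt.data.slice(4) + "\r\n");
            }
            console.log(evt.data);
        };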

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was being treated as if I were "shopping or seeking product recommendations" even though I was NOT - BTW, they have deleted my comments too, which were not offensive in nature. Anyway - I have re-phrased some parts of my question and I hope SF admins "do not modify / edit" this one - I will be most grateful for that. I have a lot of respect for the people who visit this site and help others! Just to clarify: to go by SF rules, I am not asking someone to design this solution; I am simply seeking real-world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have, or any problems they may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research and I am seeking expert assurance / suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit) with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and have read the tech-notes for Desktop Recovery 2011. Our team has planned to deploy this as follows: 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database), 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database), 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app), and 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application. Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc. - we have the Windows firewall disabled already). Server hardware: per SQL server: 16GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM server: 16GB RAM + SAS disks + dual Xeon. System Recovery management server: 16GB RAM + SAS disks + dual Xeon. The above is the initial plan we have for 3000-4000 client workstations (Windows). Now my questions :-) a) If we had these users distributed amongst two sites with an AD DC / GC in each site, how would I restrict the SEPM and desktop management solutions to only check for users in their respective site? b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity); how would we control which SEPM / management server is responsible for which site? c) We have NetBackup in our environment backing up other servers; I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all 4 of these boxes as well. (Licensing is not an issue as we have the complete Symantec portfolio included in our license.) d) Now - saving desktop backups - what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery server itself, or back up to the user's hard drive locally and then copy it over to a network location? Suggestions welcome :-) If you have anything to add / correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections with the above - many thanks!

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up: I've been a programmer for quite some time now, but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either: kill -9 a process (bad) or spontaneously pull the power plug on a running computer or server (worse). However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out. My Concern: Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through the Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost? My Hope: My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is I've had to repair some MySQL tables, but I don't seem to have lost any data. But, if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it. My Question(s): Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage? What Would Help Me Most: Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system. (It seems instant, but can someone slow it down for me?) Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely)... Descriptions of the measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur. (To comfort me.) Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated. Thanks so much!
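To make the buffering point above concrete, here is a minimal C# sketch (an illustrative addition; the file name and record contents are made up). Bytes you Write() sit in the application's own buffer first, then in the OS cache, and only then on the platter; each Flush variant moves them one stage further:

using System.IO;
using System.Text;

class FlushDemo
{
    static void Main()
    {
        using (var fs = new FileStream("journal.log", FileMode.Append))
        {
            byte[] record = Encoding.UTF8.GetBytes("important record\n");
            fs.Write(record, 0, record.Length); // app buffer only: lost if the process is killed right now
            fs.Flush(false);                    // handed to the OS cache: survives kill -9, not a power pull
            fs.Flush(true);                     // forced through to the physical disk (like fsync): survives both
        }
    }
}

This is the heart of the question's first scenario: kill -9 typically costs only whatever a careless application still held in its own buffers, while a power pull can additionally lose anything the OS cache and the drive's write cache had not yet committed.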

    Read the article

  • Add keyboard languages to XP, Vista, and Windows 7

    - by Matthew Guay
    Do you regularly need to type in multiple languages in Windows? Here we’ll show you the easy way to add and change input languages for your keyboard in XP, Vista, and Windows 7. Windows Vista and 7 come preinstalled with support for viewing a wide variety of languages, so adding an input language is fairly simple. Adding an input language is slightly more difficult in XP, and requires installing additional files if you need an Asian or Complex script language. First we show how to add an input language in Windows Vista and 7; it’s basically the same in both versions. Then, we show how to add a language to XP, and also how to add Complex Script support. Please note that this is only for adding an input language, which will allow you to type in the language you select. This does not change your user interface language. Change keyboard language in Windows 7 and Vista It is fairly simple to add or change a keyboard language in Windows 7 or Vista. In Windows 7, enter “keyboard language” in the Start menu search box, and select “Change keyboards or other input methods”. In Windows Vista, open Control Panel and enter “input language” in the search box and select “Change keyboards or other input methods”. This also works in Windows 7. Now, click Change Keyboards to add another keyboard language or change your default one. Our default input language is US English, and our default keyboard is the US keyboard layout. Click Add to insert another input language while still leaving your default input language installed. Here we selected the standard Thai keyboard language (Thai Kedmanee), but you can select any language you want. Windows offers almost any language you can imagine, so just look for the language you want, select it, and click Ok. Alternatively, if you want, you can click Preview to see your layout choice before accepting it. This shows only the default characters, not the ones that will be activated with Shift or other keys (many Asian languages use many more characters than English, and require the use of Shift and other keys to access them all). Once you’re finished previewing, click Close and then press Ok on the previous dialog. Now you will see both of your keyboard languages in the Installed services box. You can click Add to go back and get more, or move your selected language up or down (to change its priority), or simply click Apply to add the new language. Also, you can now change the default input language from the top menu. This is the language that your keyboard will start with when you boot your computer. So, if you mainly use English but also use another language, usually it is best to leave English as your default input language. Once you’ve pressed Apply or Ok, you will see a new icon beside your system tray with the initials of your default input language. If you click it, you can switch between input languages. Alternatively, you can switch input languages by pressing Alt+Shift on your keyboard. Some complex languages, such as Chinese, may have extra buttons to change input modes to accommodate their large alphabets. If you would like to change the keyboard shortcut for changing languages, go back to the Input Languages dialog, and select the “Advanced Key Settings” tab. Here you can change settings for Caps Lock and change or add key sequences to change between languages.
Also, the On-Screen Keyboard will display the correct keyboard language (here the keyboard is displaying Thai), which can be a helpful reference if your physical keyboard doesn’t have your preferred input language printed on it. To open this, simply enter “On-Screen Keyboard” in the Start menu search, or click All Programs > Accessories > On-Screen Keyboard. Change keyboard language in Windows XP The process for changing the keyboard language in Windows XP is slightly different. Open Control Panel, and select “Date, Time, Language, and Regional Options”. Select “Add other languages”. Now, click Details to add another language. XP does not include support for Asian and complex languages by default, so if you need to add one of those languages we have details for that below. Click Add to add an input language. Select your desired language from the list, and choose your desired keyboard layout if your language offers multiple layouts. Here we selected Canadian French with the default layout. Now you will see both of your keyboard languages in the Installed services box. You can click Add to go back and add more, or move your selected language up or down (to change its priority), or simply click Apply to add the new language. Once you’ve pressed Apply or Ok, you will see a new icon beside your system tray with the initials of your default input language. If you click it, you can switch between input languages. Alternatively, you can switch input languages by pressing Alt+Shift on your keyboard. If you would like to change the keyboard shortcut for changing languages, go back to the Input Languages dialog, and click the “Key Settings” button at the bottom of the dialog. Here you can change settings for Caps Lock and change or add key sequences to change between languages. Add support to XP for Asian and Complex script languages Windows XP does not include support for Asian and Complex script languages by default, but you can easily add them to your computer. This is useful if you wish to type in one of these languages, or simply want to read text written in these languages, since XP will not display these languages correctly if they are not installed. If you wish to install Chinese, Japanese, and/or Korean, check the “Install files for East Asian languages” box. Or, if you need to install a complex script language (including Arabic, Armenian, Georgian, Hebrew, the Indic languages, Thai, and Vietnamese), check the “Install files for complex script and right-to-left languages” box. Choosing either of these options will open a prompt reminding you that this option will take up more disk space. Support for complex languages will require around 10 MB of hard drive space, but East Asian language support may require 230 MB or more of free disk space. Click Ok, and click Apply to install your language files. You may have to insert your XP CD into your CD drive to install these files. Insert the disc, and then click Ok. Windows will automatically copy the files, including fonts for these languages… …and then will ask you to reboot your computer to finalize the settings. Click Yes, and then reopen the “Add other languages” dialog when your computer has rebooted, and add a language as before. Now you can add Complex and/or Asian languages to XP, just as above. Here is the XP taskbar language selector with Thai installed. Conclusion Unfortunately we haven’t found a way to add Asian and complex languages in XP without having an XP disc. If you know of a way, let us know in the comments.
(No downloading the XP disc from torrent site answers please) Adding an input language is very important for bilingual individuals, and can also be useful if you simply need to occasionally view Asian or Complex languages in XP.  And by following the correct instructions for your version of Windows, it should be very easy to add, change, and remove input languages.
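As a small programmatic aside (an addition to this article, with illustrative names): on the .NET side, the per-user keyboard layouts that the steps above install are exposed through the System.Windows.Forms.InputLanguage class, which makes it easy to verify what is installed. A minimal sketch, compiled with a reference to System.Windows.Forms:

using System;
using System.Windows.Forms;

class ListInputLanguages
{
    [STAThread]
    static void Main()
    {
        // Enumerate every input language installed for the current user.
        foreach (InputLanguage lang in InputLanguage.InstalledInputLanguages)
        {
            Console.WriteLine("{0} - {1}", lang.Culture.DisplayName, lang.LayoutName);
        }

        // Show the layout that is currently active for this thread.
        Console.WriteLine("Current: " + InputLanguage.CurrentInputLanguage.LayoutName);
    }
}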

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight, and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have been optimized with very efficient, limited, or no locking but are still completely thread-safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner:

internal class Producer
{
    public int Iterations { get; set; }
    public Action<string> ProduceDelegate { get; set; }

    public void Produce()
    {
        for (int i = 0; i < Iterations; i++)
        {
            ProduceDelegate("Hello");
        }
    }
}

Then likewise, I created a Consumer that takes a Func<string> that reads from whichever queue we’re testing and returns either the string if data exists or null if not.  If the item doesn’t exist, it will wait 10 ms before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

internal class Consumer
{
    public Func<string> ConsumeDelegate { get; set; }
    public bool HaltWhenEmpty { get; set; }

    public void Consume()
    {
        bool processing = true;

        while (processing)
        {
            string result = ConsumeDelegate();

            if (result == null)
            {
                if (HaltWhenEmpty)
                {
                    processing = false;
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromMilliseconds(10));
                }
            }
            else
            {
                DoWork(); // do something non-trivial so consumers lag behind a bit
            }
        }
    }
}

Okay, now that we’ve done that, we can launch threads in varying numbers using lambdas for each different method of production/consumption.
First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

// lambda for putting to a typical Queue with locking...
Action<string> productionDelegate = s =>
{
    lock (_mutex)
    {
        _mutexQueue.Enqueue(s);
    }
};

// and lambda for typical getting from a Queue with locking...
Func<string> consumptionDelegate = () =>
{
    lock (_mutex)
    {
        if (_mutexQueue.Count > 0)
        {
            return _mutexQueue.Dequeue();
        }
    }
    return null;
};

Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

// lambda for putting to a ConcurrentQueue, notice it needs no locking!
Action<string> productionDelegate = s =>
{
    _concurrentQueue.Enqueue(s);
};

// lambda for getting from a ConcurrentQueue, once again, no locking required.
Func<string> consumptionDelegate = () =>
{
    string s;
    return _concurrentQueue.TryDequeue(out s) ? s : null;
};

So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results.  Basically I'm timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code (a sketch of such a harness appears at the end of this post); basically it just launches code that creates the producers and consumers, launches them in their own threads, and then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly divided between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking:

Consumers  Producers      1      2      3  Time (ms)
---------  ---------  -----  -----  -----  ---------
        1          1   4284   5153   4226    4554.33
       10         10   4044   3831   5010    4295.00
      100        100   5497   5378   5612    5495.67
     1000       1000  24234  25409  27160   25601.00

And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

Consumers  Producers      1      2      3  Time (ms)
---------  ---------  -----  -----  -----  ---------
        1          1   3647   3643   3718    3669.33
       10         10   2311   2136   2142    2196.33
      100        100   2480   2416   2190    2362.00
     1000       1000   7289   6897   7061    7082.33

Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections; they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!
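Since the test harness itself was omitted above, here is a minimal sketch of what such a harness might look like, reconstructed from the description (the method and variable names here are assumptions, not the author's actual code; the caller supplies the two delegates shown earlier):

// Hypothetical harness, reconstructed from the post's description.
// Requires: using System; using System.Diagnostics; using System.Linq; using System.Threading;
private static void RunTest(int producerCount, int consumerCount,
                            Action<string> produce, Func<string> consume)
{
    const int totalItems = 10000000;

    var producers = Enumerable.Range(0, producerCount)
        .Select(i => new Producer { Iterations = totalItems / producerCount,
                                    ProduceDelegate = produce })
        .ToList();
    var consumers = Enumerable.Range(0, consumerCount)
        .Select(i => new Consumer { ConsumeDelegate = consume, HaltWhenEmpty = false })
        .ToList();

    var producerThreads = producers.Select(p => new Thread(p.Produce)).ToList();
    var consumerThreads = consumers.Select(c => new Thread(c.Consume)).ToList();

    var stopwatch = Stopwatch.StartNew();
    consumerThreads.ForEach(t => t.Start());
    producerThreads.ForEach(t => t.Start());

    // Once every producer has finished, tell the consumers to halt when the queue drains.
    producerThreads.ForEach(t => t.Join());
    consumers.ForEach(c => c.HaltWhenEmpty = true); // ideally a volatile flag in real code
    consumerThreads.ForEach(t => t.Join());
    stopwatch.Stop();

    Console.WriteLine("{0} producers / {1} consumers: {2} ms",
                      producerCount, consumerCount, stopwatch.ElapsedMilliseconds);
}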

    Read the article

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 tooling support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF services, WCF RIA Services client proxies or SharePoint lists.  For example, let’s assume we add a “Person” class like the one below to our project (see the reconstructed sketch after the WCF RIA Services section): We could then add it to the Data Sources window, which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated with each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources window onto the design surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, right-clicking on any of its properties within the property grid, and pulling up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format converter, specify string-format settings, specify how validation errors should be handled, etc.: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML IntelliSense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 tooling release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoids having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer surface, VS 2010 also now allows you to right-click on any control (or on one of its properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style IntelliSense VS 2010 enables you to easily modify styles you already have in XAML, and now you get IntelliSense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how IntelliSense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20-minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle tiers. WCF RIA Services uses WCF for communication between the client and the server.  It supports both an optimized .NET-to-.NET binary serialization format, as well as a set of open extensions to the Atom format known as OData and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13-minute Channel 9 video.
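The “Person” class from the data-binding walkthrough, and the domain service pattern described above, were shown as screenshots in the original post. Below is a reconstructed sketch of what they might look like in code; the property names follow the walkthrough (including BirthDate), while the service name and sample data are assumptions. Note that WCF RIA Services requires entities returned from query methods to carry a [Key] member:

using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server;

// A plain CLR object (POCO) that the Data Sources window can consume.
public class Person
{
    [Key]
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; } // defaulted to a DatePicker in the walkthrough
}

// A minimal domain service exposing query access to Person from the mid-tier.
[EnableClientAccess()]
public class PersonDomainService : DomainService
{
    public IQueryable<Person> GetPeople()
    {
        // A real service would query a database; in-memory data keeps the sketch self-contained.
        return new[]
        {
            new Person { Id = 1, FirstName = "Ada", LastName = "Lovelace",
                         BirthDate = new DateTime(1815, 12, 10) }
        }.AsQueryable();
    }
}

On the Silverlight client, the generated DomainContext would then expose a GetPeopleQuery() that can be bound to controls; that client-side piece, and the validation and authorization hooks described above, are omitted from this sketch.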
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walk you through building an end-to-end application with them.    The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s release – it really completes the Silverlight development story and delivers a great end-to-end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones: Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office), I had never found a phone that “just worked”, especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, which was Blackberry’s first touch-screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model, and it added Wi-Fi. Over the last few years I have watched others acquire and fall in love with their ‘Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks, they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as “flashy” as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for its OS; 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SD card. (I currently have two 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I “rooted” the phone, which means I gained superuser access to the OS on the phone, and that opens a number of doors for further modifications down the road. Also, being a geek meant I had already set up a development environment and built and deployed the obligatory “Hello Droid” application. I will be writing about my development experiences with this new platform here often; to start off, I thought I would share my current application list to give you an idea what I am using.
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid

    Read the article

  • Working with Silverlight DataGrid RowDetailsTemplate

    - by mohanbrij
    In this post I am going to show how we can use the Silverlight DataGrid RowDetailsTemplate. Before I start, I assume that you know the basics of Silverlight and also know how to create a Silverlight project. I started with the Silverlight Application template and kept all the default options when I created the Silverlight project. After this I added a Silverlight DataGrid control to my MainForm.xaml page, using the drag-and-drop feature of the Visual Studio IDE, which adds the default namespace and references automatically. Just to give you a quick look at what exactly I am going to do, I will show you my final target in the screen below, before I start explaining the rest of my code. Before I start with the real code, first I have to do some groundwork; as I am not getting the data from a DB, I am creating a class where I will populate the dummy data. EmployeeData.cs:

public class EmployeeData
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Country { get; set; }

    public EmployeeData()
    {
    }

    public List<EmployeeData> GetEmployeeData()
    {
        List<EmployeeData> employees = new List<EmployeeData>();
        employees.Add(new EmployeeData
        {
            Address = "#407, PH1, Foyer Appartment",
            City = "Bangalore",
            Country = "India",
            FirstName = "Brij",
            LastName = "Mohan",
            State = "Karnataka"
        });
        employees.Add(new EmployeeData
        {
            Address = "#332, Dayal Niketan",
            City = "Jamshedpur",
            Country = "India",
            FirstName = "Arun",
            LastName = "Dayal",
            State = "Jharkhand"
        });
        employees.Add(new EmployeeData
        {
            Address = "#77, MSR Nagar",
            City = "Bangalore",
            Country = "India",
            FirstName = "Sunita",
            LastName = "Mohan",
            State = "Karnataka"
        });
        return employees;
    }
}

The above class will give me some sample data; I think this will be good enough to start with the actual code. Now I am giving below the XAML code from my MainForm.xaml. First I will put the Silverlight DataGrid:

<data:DataGrid x:Name="gridEmployee" CanUserReorderColumns="False" CanUserSortColumns="False"
               RowDetailsVisibilityMode="VisibleWhenSelected" HorizontalAlignment="Center"
               ScrollViewer.VerticalScrollBarVisibility="Auto" Height="200"
               AutoGenerateColumns="False" Width="350" VerticalAlignment="Center">

Here, the most important property I am setting is RowDetailsVisibilityMode="VisibleWhenSelected". This will display the row details only when we select the desired row. The other options for this property are Collapsed and Visible, which will make the row details either always collapsed or always visible; to get the real effect I have selected VisibleWhenSelected. Now I am going to put in the rest of my XAML code:

<data:DataGrid.Columns>
    <!-- Begin FirstName column -->
    <data:DataGridTextColumn Width="150" Header="First Name" Binding="{Binding FirstName}"/>
    <!-- End FirstName column -->
    <!-- Begin LastName column -->
    <data:DataGridTextColumn Width="150" Header="Last Name" Binding="{Binding LastName}"/>
    <!-- End LastName column -->
</data:DataGrid.Columns>
<data:DataGrid.RowDetailsTemplate>
    <!-- Begin row details section. -->
    <DataTemplate>
        <Border BorderBrush="Black" BorderThickness="1" Background="White">
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="0.2*" />
                    <ColumnDefinition Width="0.8*" />
                </Grid.ColumnDefinitions>
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>
                <!-- Controls are bound to the address properties. -->
                <TextBlock Text="Address : " Grid.Column="0" Grid.Row="0" />
                <TextBlock Text="{Binding Address}" Grid.Column="1" Grid.Row="0" />
                <TextBlock Text="City : " Grid.Column="0" Grid.Row="1" />
                <TextBlock Text="{Binding City}" Grid.Column="1" Grid.Row="1" />
                <TextBlock Text="State : " Grid.Column="0" Grid.Row="2" />
                <TextBlock Text="{Binding State}" Grid.Column="1" Grid.Row="2" />
                <TextBlock Text="Country : " Grid.Column="0" Grid.Row="3" />
                <TextBlock Text="{Binding Country}" Grid.Column="1" Grid.Row="3" />
            </Grid>
        </Border>
    </DataTemplate>
    <!-- End row details section. -->
</data:DataGrid.RowDetailsTemplate>

In the code above, first I declare the simple DataGridTextColumns for FirstName and LastName, and after this I create the RowDetailsTemplate, where the code is just what we usually write to design a Grid. Nothing here is very RowDetailsTemplate-specific; most of the code you see inside it is plain and simple, binding the rest of the address columns. And that's it. Once we bind the DataGrid, we are ready to go. In the code below from MainForm.xaml.cs, I am just binding the DataGrid:

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        BindControls();
    }

    private void BindControls()
    {
        EmployeeData employees = new EmployeeData();
        gridEmployee.ItemsSource = employees.GetEmployeeData();
    }
}

Once you run this, you can see the output I showed in the screenshot above. In this example I have shown only a very basic case; now it is up to your creativity and requirements. You can put other controls like CheckBoxes, Images, or even another DataGrid inside this RowDetailsTemplate (a sketch of one such extension follows below). I am attaching my sample source code with this post. I have used Silverlight 3 and Visual Studio 2008, but this is fully compatible with Silverlight 4 and Visual Studio 2010; you may just need to upgrade the attached sample. You can download it from here.
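As a possible extension (my sketch, not part of the original sample): if you want to adjust the details panel as each row expands, the Silverlight DataGrid raises a LoadingRowDetails event whose arguments expose both the row's data item and the root element of the RowDetailsTemplate. Assuming the gridEmployee grid and EmployeeData class above, and a using directive for System.Windows.Media:

// Hypothetical extension: wire this up in MainPage's constructor after BindControls().
gridEmployee.LoadingRowDetails += (sender, e) =>
{
    // e.Row.DataContext is the EmployeeData item for the expanded row,
    // and e.DetailsElement is the root of the RowDetailsTemplate (the Border above).
    var employee = e.Row.DataContext as EmployeeData;
    if (employee != null && employee.City == "Bangalore")
    {
        ((Border)e.DetailsElement).Background = new SolidColorBrush(Colors.LightGray);
    }
};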

    Read the article

  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently, but perhaps you have not looked at it too closely yet. Today we will take a look at 10.50 and let you see what this “new browser” is all about.

    The New Engines

    Carakan JavaScript Engine: Runs web applications up to 7 times faster than its predecessor Futhark.
    Vega Graphics Library: Enables super fast and smooth graphics on everything from tab switching to webpage animation.
    Presto 2.5: Provides support for HTML5, CSS2.1 and the latest CSS3 standards.

    A Look at the Features Available

    If you have installed or used older versions of Opera before, then the default look after a clean install will probably seem rather different. The main differences in appearance are located within the “glass border” areas of the browser. The “Speed Dial” setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of “dials” using the “Configure Speed Dial Command”.

    One of the “standout” differences is the “O Button”. All of the menus have been condensed into this single access point, but it only takes a few moments to find what you are looking for. If you have used this style in earlier versions of Opera, some of the items have been moved around. For those who prefer the “Menu Bar”, it can be easily restored using the “Show Menu Bar Command”.

    If desired you can actually “extend” the “Tab Bar” downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the “Tab Bar” and adjust it to suit your personal needs. The only problem with this feature is that it will quickly use up a good-sized portion of your available UI and browser window space.

    The “Password Manager” is ready to access when needed…the background for the button will turn a shiny metallic blue when you open a webpage that you have “Login Information” saved for.

    One of the new features is a small “Recycle Bin Button” in the upper right corner. Clicking on this will display a list of recently closed tabs, letting you have easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have as an easy access button.

    For those who were used to how the “Zoom Feature” looked before, it has a new “look” to it. Instead of the pop-up menu-type listing of “view sizes” present before, you now have a slider button that you can use to adjust the zooming level.

    For our default setup here the “Sidebar Panels” available were: “Bookmarks, Widgets, Unite, Notes, Downloads, History, & Panels”. Additional panels such as “Links, Windows, Search, Info, etc.” are available if you want and/or need them (accessible using the “Panels Plus Sign Button”).

    The “Opera Link Button” makes it easy for you to synchronize your “Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes”. Note: “Opera Link” requires an account and can be signed up for using the link provided below.

    Want to share files with your family and friends? “Unite” allows you to do that and more. With “Unite” you can: “Stream Music, Show Photo Galleries, Share Files and/or Folders, & host webpages directly from your browser”. We have a more in-depth look at “Unite” in our article here. Note: Use of “Unite” requires an Opera account.

    Got a slow internet connection? “Opera Turbo” can help with that by running the web traffic through their “compression servers” to speed up your web browsing.
    Keep in mind that “Opera Turbo” will not engage if you are accessing a secure website (i.e. your bank’s website), thus preserving your security. Note: “Opera Turbo” can be set up to automatically detect slow internet connections (i.e. crowded Wi-Fi in a cafe).

    Opera has a built-in “Private Browsing Mode” now for those who prefer anonymous browsing and want to keep the “history records clean” on their computer. To access it go to “Tabs and windows” and select “New private tab” or “New private window” as desired. When you open your new “Private Tab or Window” you will see the following message with details on how Opera will handle browsing information, plus a large “door hanger symbol”. Notice that the one tab is locked into “Private Browsing Mode” while the others are still working in “Regular Browsing Mode”. Very nice! A miniature version of the “door hanger symbol” will be present on any tab that is locked into “Private Browsing Mode”.

    If you are using Windows 7 then you will love how things look from your “Taskbar”. Here you can see four very nice looking thumbnails for the tabs that we had open. All that you have to do is click on the desired thumbnail… The “Context Menu” looks just as lovely as the thumbnails and definitely has some terrific functionality built into it.

    Add Enhanced Aero Capability

    If you love “Aero” and want more for your new Opera install, then we have the perfect theme for you. The theme’s name is Z1-AV69, and once you have downloaded it you will need to place it in the “Skins Subfolder” in Opera’s “Program Files Folder”. Note: For our example we used version 1.10, but version 2.00 is now available (link provided below).

    Once you have restarted Opera, go to the “O Menu” and select “Appearance”. When the “Appearance Window” opens, click on “Z1-Glass Skin” and then click “OK”. All of a sudden you will have more “Aero Goodness” to enjoy. Compare this screenshot with the one at the top of this article…the only part that is not transparent now is the browser window area itself.

    Want even more “Aero Goodness”? Right click on the “Tab Bar” and set “Tab Bar Placement” to “Left”. Note: You can achieve the same effect by setting the “Tab Bar Placement” to “Right”. With the “Speed Dial” visible you will be able to see your wallpaper with ease. While this is obviously not for everyone, it does make for a great visual trick.

    Portable Versions

    Perhaps you need this wonderful new version of Opera to go with you wherever you go during the day. Not a problem…just visit the Opera USB website to choose a version that works best for you. You can select from “Zip or Exe” setup files and, if needed, update an older portable version using a “Zipped Update Files Package”. If you are updating an older version, keep in mind that you will need to delete the old “OperaUSB.exe” file due to changes with the new setup files. During our tests, updating older portable versions went well for the most part, but we did experience a few “odd UI quirks” here and there…so we recommend setting up a clean install if possible.

    Conclusion

    The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit, we think that you will be pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera, be certain to look at our how-to article here.
    Links

    Download Opera 10.50 for your location (Windows)
    Get the latest Snapshot versions for Linux & Mac
    Sign up for an Opera Link account
    View in-depth detail on Opera 10.50’s features
    Download the Z1-AV69 Aero Theme
    Download Portable Opera 10.50

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command:

        Install-Package AjaxControlToolkit

    The New jQuery Extender Base Class

    This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx

    Why are we rewriting Ajax Control Toolkit controls with jQuery?

    There are very few developers actively working with the Microsoft Ajax Library, while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features.

    Instantiating Controls with data-* Attributes

    We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" />
        <script type="text/javascript">
        //<![CDATA[
        Sys.Application.add_init(function() {
            $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check",
                "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19,
                "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif",
                "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1"));
        });
        //]]>
        </script>

    Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible!
    The jQuery version of the ToggleButton injects the following HTML and script into the page:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked"
            data-act-togglebuttonextender="imageWidth:19, imageHeight:19,
            uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif',
            uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET',
            checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" />

    Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don’t need to feel embarrassed when selecting View Source in your browser).

    Ajax Control Toolkit versus jQuery

    So in a jQuery world, why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world:

    1. Ajax Control Toolkit controls run on both the server and client. jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution.

    2. Ajax Control Toolkit controls provide a better Visual Studio experience. You don’t get any design-time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties.

    3. Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript, and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript.

    Better ToolkitScriptManager Documentation

    With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx

    Other Fixes

    This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement

    We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API.
    To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx

    Summary

    We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.
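    As a footnote, the extender properties visible in the data-act-togglebuttonextender attribute above can also be configured from server-side code rather than markup. Here is a minimal, hypothetical code-behind sketch; the CheckBox1 and form1 names are assumptions about the hosting page, not part of the toolkit:

        using System;
        using AjaxControlToolkit;

        public partial class ToggleButtonDemo : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Create the extender programmatically and point it at an
                // existing CheckBox on the page.
                var toggle = new ToggleButtonExtender
                {
                    ID = "ToggleButtonExtender1",
                    TargetControlID = "CheckBox1",
                    ImageWidth = 19,
                    ImageHeight = 19,
                    CheckedImageUrl = "ToggleButton_Checked.gif",
                    UncheckedImageUrl = "ToggleButton_Unchecked.gif"
                };
                form1.Controls.Add(toggle);
            }
        }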

    Read the article

  • Behind ASP.NET MVC Mock Objects

    - by imran_ku07
    Introduction:

    I think this sentence has now become very familiar to ASP.NET MVC developers: "ASP.NET MVC is designed with testability in mind". But what did the ASP.NET MVC team do to make applications built with ASP.NET MVC easily testable? Understanding this is very important, because it also gives you some help when designing custom classes. So in this article I will discuss some abstract classes provided by the ASP.NET MVC team for the various ASP.NET intrinsic objects, including HttpContext, HttpRequest, and HttpResponse, that make these objects testable. I will also discuss why it is difficult to test ASP.NET Web Forms.

    Description:

    From Classic ASP to ASP.NET MVC, the ASP.NET intrinsic objects are extensively used in every form of web application. They provide information about Request, Response, Server, Application and so on. But ASP.NET MVC uses these intrinsic objects in an abstract manner. The reason for this abstraction is to make your application testable. So let's see the abstraction.

    As we know, ASP.NET MVC uses the same runtime engine as ASP.NET Web Forms, therefore the first receiver of the request after IIS and aspnet_filter.dll is aspnet_isapi.dll. This will start the application domain. With the application domain up and running, ASP.NET does some initialization, and afterwards it will call Application_Start if it is defined. Then the normal HTTP pipeline event handlers will be executed, including both HTTP Modules and global.asax event handlers. One of the HTTP Modules registered by ASP.NET MVC is UrlRoutingModule. The purpose of this module is to match a route defined in global.asax. Every matched route must have an IRouteHandler. In the default case this is MvcRouteHandler, which is responsible for determining the HTTP handler; it returns MvcHandler (which is derived from IHttpHandler). In simple words, a Route has an MvcRouteHandler, which returns MvcHandler, the IHttpHandler of the current request. In between the HTTP pipeline events, ASP.NET MVC's handler method, MvcHandler.ProcessRequest, will be executed, as shown below:

        void IHttpHandler.ProcessRequest(HttpContext context)
        {
            this.ProcessRequest(context);
        }

        protected virtual void ProcessRequest(HttpContext context)
        {
            // HttpContextWrapper inherits from HttpContextBase
            HttpContextBase ctxBase = new HttpContextWrapper(context);
            this.ProcessRequest(ctxBase);
        }

        protected internal virtual void ProcessRequest(HttpContextBase ctxBase)
        {
            . . .
        }

    HttpContextBase is the base class, and HttpContextWrapper inherits from it; these are the parent classes that include information about a single HTTP request. This is what the ASP.NET MVC team did: just wrap the old intrinsic HttpContext in an HttpContextWrapper object and give other frameworks the opportunity to provide their own implementation of HttpContextBase. For example:

        public class MockHttpContext : HttpContextBase
        {
            . . .
        }

    As you can see, it is very easy to create your own HttpContext. That is what third-party mock frameworks like TypeMock, Moq, RhinoMocks, or NMock2 do to provide their own implementations of the ASP.NET intrinsic object classes.

    The key point to note here is the types of the ASP.NET intrinsic objects in ASP.NET Web Forms versus ASP.NET MVC. For example, in ASP.NET Web Forms the type of the Request object is HttpRequest (which is sealed), while in ASP.NET MVC the type of the Request object is HttpRequestBase. This is one of the reasons that testing ASP.NET Web Forms is difficult: because there is no base class and the HttpRequest class is sealed, it cannot act as a base class for others. On the other side, ASP.NET MVC always uses a base class to give third parties and unit test frameworks a chance to create their own implementations of the ASP.NET intrinsic objects.

    Therefore we can say that in ASP.NET MVC, the intrinsic objects are typed as base classes (for example, HttpContextBase). Each of these base classes exposes the same members as the intrinsic object it abstracts, but contains only virtual members which simply throw an exception. ASP.NET MVC also provides the corresponding wrapper classes (for example, HttpRequestWrapper), which provide a concrete implementation of the base classes in the form of the ASP.NET intrinsic object. Other wrapper classes may be defined by third parties in the form of a mock object for testing purposes.

    So we can say that a Request object in ASP.NET MVC may be an HttpRequestWrapper or may be a MockRequestWrapper (assuming that the MockRequestWrapper class is used for testing purposes). Here is a list of the ASP.NET intrinsic objects and their implementations in ASP.NET MVC in the form of base and wrapper classes:

    Base Class / Wrapper Class / ASP.NET Intrinsic Object
    HttpApplicationStateBase / HttpApplicationStateWrapper / Application (abstracts the intrinsic Application object)
    HttpBrowserCapabilitiesBase / HttpBrowserCapabilitiesWrapper / HttpBrowserCapabilities (abstracts the HttpBrowserCapabilities class)
    HttpCachePolicyBase / HttpCachePolicyWrapper / HttpCachePolicy (abstracts the HttpCachePolicy class)
    HttpContextBase / HttpContextWrapper / HttpContext (abstracts the intrinsic HttpContext object)
    HttpFileCollectionBase / HttpFileCollectionWrapper / HttpFileCollection (abstracts the HttpFileCollection class)
    HttpPostedFileBase / HttpPostedFileWrapper / HttpPostedFile (abstracts the HttpPostedFile class)
    HttpRequestBase / HttpRequestWrapper / Request (abstracts the intrinsic Request object)
    HttpResponseBase / HttpResponseWrapper / Response (abstracts the intrinsic Response object)
    HttpServerUtilityBase / HttpServerUtilityWrapper / Server (abstracts the intrinsic Server object)
    HttpSessionStateBase / HttpSessionStateWrapper / Session (abstracts the intrinsic Session object)
    HttpStaticObjectsCollectionBase / HttpStaticObjectsCollectionWrapper / HttpStaticObjectsCollection (abstracts the HttpStaticObjectsCollection class)

    Summary:

    ASP.NET MVC provides a set of abstract classes for the ASP.NET intrinsic objects in the form of base classes, allowing anyone to create their own implementation. In addition, ASP.NET MVC also provides a set of concrete classes in the form of wrapper classes. This design really makes applications easier to test, and an application may even replace the concrete implementation with its own, which makes ASP.NET MVC very flexible.
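    To round out the discussion, here is a minimal hand-rolled mock sketch; the MockHttpRequest, MockHttpContext, and HomeController names are illustrative assumptions, not framework types. It shows how a unit test can hand a fake context to a controller without ever touching a web server:

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        // Only the members the test actually touches need to be overridden;
        // everything else keeps the base-class behavior of throwing an exception.
        public class MockHttpRequest : HttpRequestBase
        {
            public override string HttpMethod
            {
                get { return "GET"; }
            }
        }

        public class MockHttpContext : HttpContextBase
        {
            private readonly HttpRequestBase _request = new MockHttpRequest();

            public override HttpRequestBase Request
            {
                get { return _request; }
            }
        }

        // Inside a test method, the fake context replaces the real intrinsic objects:
        //   var controller = new HomeController();
        //   controller.ControllerContext = new ControllerContext(
        //       new MockHttpContext(), new RouteData(), controller);
        //   var result = controller.Index();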

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in most situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

    So I set out to see what the improvements would be for the ConcurrentDictionary; would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so will have some duplicates and some unique
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that so I could test different implementations, I defined a GetDelegate and an AddDelegate that call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object.
    Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
    ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    The approach to each of these is fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.
    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            7916             3285
        10           8293             3481
        100          8799             3532
        1000         8815             3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            8445             3624
        10           11002            4119
        100          11076            3992
        1000         14794            4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            17443            3726
        10           14181            1897
        100          15141            1994
        1000         17209            2128

    The first test I did across the board is the Mostly Misses category. The mostly misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend. In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". Since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

    So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache than to need to add it). And yes, here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement. I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?

    Use System.Collections.Generic.Dictionary when:
    You need a single-threaded Dictionary (no locking needed).
    You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:
    You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both Dictionaries have their strong suits. I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for, and make your decision based on that criteria.
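    One closing note on the cache scenario above: ConcurrentDictionary also exposes an atomic GetOrAdd() method that collapses the separate "try get, then add" steps of my Accessor into a single thread-safe call. A minimal sketch (the class and method names here are my own, not part of the test harness):

        using System.Collections.Concurrent;

        internal static class CacheSketch
        {
            private static readonly ConcurrentDictionary<int, string> _cache =
                new ConcurrentDictionary<int, string>();

            public static string Lookup(int key)
            {
                // Looks the key up and, only on a miss, runs the value factory
                // and inserts the result. Note the factory may run more than
                // once under contention, but only one value is ever stored.
                return _cache.GetOrAdd(key, k => k.ToString());
            }
        }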

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem. Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element. This process is called partitioning.

    Partitioning our tasks is a challenging feat. There are opposing forces at work here: too many partitions add overhead, too few partitions leave processors idle. Trying to strike the perfect balance between the two extremes is the goal for which we should aim. Luckily, the Task Parallel Library automatically handles much of this process. However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions.

    First off, I'd like to say that this is a more advanced topic. It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place. The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted. I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them. However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL.

    I indirectly mentioned partitioning while discussing aggregation. Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions. For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading. This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system.

    In order to fully exploit this power, we need to partition our work into Tasks. A task is a simple set of instructions that can be run on a PE. Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle. A naive implementation would be to just take our data and partition it with one element in our collection being treated as one task. When we loop through our collection in parallel using this approach, we'd just process one item at a time, then reuse that thread to process the next, etc. There's a flaw in this approach, however. It will tend to be slower than necessary, often slower than processing the data serially.

    The problem is that there is overhead associated with each task. When we take a simple foreach loop body and implement it using the TPL, we add overhead. First, we change the body from a simple statement to a delegate, which must be invoked. In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool's current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it. If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance.

    The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.
    By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

    When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations. In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks, and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance.

    The Task Parallel Library handles this more elaborately. Chunks are determined at runtime, and start small. They grow slowly over time, getting larger and larger. This tends to lead to near-optimum load balancing, even in odd cases such as increasing or decreasing workloads. Parallel.ForEach is a bit more complicated, however.

    When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime. In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming. When we run the sample, with four cores and the default Load Balancing partitioning scheme, we see the following: the colored bands represent each processing core. You can see that, when we started (at the top), we begin with very small bands of color. As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes).

    Most of the time, this is fantastic behavior, and will most likely outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner.

    There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
    By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available. If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances. These are then used by the Parallel class to split work up amongst processors. When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition. If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.

    The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project. This provides example code for implementing your own custom allocation strategies, including a static allocator of a given chunk size. Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice. The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
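    For simple index-based loops, the framework also ships ready-made partitioners via the static Partitioner class. The sketch below (the array contents and loop body are placeholder assumptions) uses Partitioner.Create(int, int) so that each task receives a range of indices rather than a single element, which removes the per-element delegate-invocation overhead discussed above:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        public static class RangePartitionExample
        {
            public static void Process(double[] data)
            {
                // Each partition is a Tuple<int, int> holding an index range;
                // the inner loop then runs as plain, cheap sequential code.
                Parallel.ForEach(Partitioner.Create(0, data.Length), range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        data[i] = Math.Sqrt(data[i]);
                    }
                });
            }
        }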

    Read the article

< Previous Page | 561 562 563 564 565 566 567 568 569 570 571 572  | Next Page >