Search Results

Search found 2299 results on 92 pages for 'hyper threading'.


  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary
    The scalability enhancements delivered by extensions to the multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2
    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements: higher performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and improved ease-of-use. These enhancements are summarized in Figure 1 ("Next Generation Web Services, Cross Data Center Replication and Ease-of-Use") and detailed in the MySQL Cluster New Features whitepaper. One of the key enhancements delivered in MySQL Cluster 7.2 is the extension of the data nodes' multi-threading.

    Multi-Threaded Data Node Extensions
    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm; sometimes also called LQH threads), 2) Transaction Coordinator threads (tc), 3) Asynchronous Replication threads (rep), 4) Schema Management threads (main), 5) Network receiver threads (recv), 6) Network send threads (send), and 7) IO threads. Each of these thread types is discussed in more detail below.

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions, which means every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that workloads needing to sustain very high update loads may require 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release only one TC thread was available; this limit has been raised to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhances complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
    The schema management thread has little load, so it is implemented as a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1; with the increase of threads in MySQL Cluster 7.2 it was also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously, other threads handled the sending operations themselves, which can provide lower latency; to achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any send threads - useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, and could be configured as either one thread per open file or a fixed number of IO threads that handle all IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements
    The scalability enhancements discussed above have made it possible to scale the CPU usage of each data node to more than 5x that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node cluster of dual-socket commodity servers based on the Intel Xeon X5670 (6 cores per socket). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper - coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion
    MySQL Cluster has achieved a range of impressive benchmark results, and, set in context with the previous 7.1 release, it is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
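
    To make the thread-type discussion concrete, here is a minimal sketch of what an explicit data node thread layout could look like in config.ini. The ThreadConfig parameter is documented for MySQL Cluster 7.2 (the simpler MaxNoOfExecutionThreads parameter remains an alternative), but the counts below are illustrative only and should be sized to your own cores and workload:

        [ndbd default]
        # Explicit thread layout for multi-threaded (ndbmtd) data nodes.
        # Counts are illustrative, not a recommendation - check the 7.2
        # documentation for the exact syntax supported by your version.
        ThreadConfig=ldm={count=8},tc={count=4},send={count=2},recv={count=2},main={count=1},rep={count=1}

    This mirrors the 2:1 LDM-to-TC ratio described above; a heavy-update workload might raise the tc count toward the 3-4 per 4 LDM threads mentioned in the post.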

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    I am having a problem with an OSQA site. It is a Django/Apache/mod_wsgi site. Every hour, the CPU spikes to 164% (average) for the httpd task; after 10 minutes, it frees back up. I have reviewed the logs and cron tables and made many config changes, but cannot track this problem down. Can someone please look at the information below and let me know if it is a configuration problem, or whether anyone else has experienced this issue? Running top shows httpd using 165% of CPU, and the VMware performance monitor also displays the spikes. This happens every hour for 10 minutes.

    I have the following information from server-status:

    Server Version: Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6
    Server Built: Feb 7 2012 09:50:15
    Current Time: Sunday, 10-Jun-2012 21:44:29 EDT
    Restart Time: Sunday, 10-Jun-2012 19:44:51 EDT
    Parent Server Generation: 0
    Server uptime: 1 hour 59 minutes 37 seconds
    Total accesses: 1088 - Total Traffic: 11.5 MB
    CPU Usage: u80.26 s243.8 cu0 cs0 - 4.52% CPU load
    .152 requests/sec - 1682 B/second - 10.8 kB/request
    4 requests currently being processed, 11 idle workers

    ....._..........__......W.......................................
    ...................................C._..._....._L__._L_._.......
    ......................

    Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process

    Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
    0-0 - 0/0/34 . 0.42 327 17 0.0 0.00 0.67 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    1-0 - 0/0/22 . 0.31 339 32 0.0 0.00 0.26 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    2-0 - 0/0/22 . 0.65 358 10 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    3-0 - 0/0/31 . 1.03 378 31 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    4-0 - 0/0/20 . 0.45 356 9 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    5-0 18852 0/16/34 _ 0.98 27 18120 0.0 0.37 0.62 69.180.250.36 osqa.informs.org GET /questions/289/what-is-the-difference-between-operations-re
    6-0 - 0/0/32 . 0.94 309 29 0.0 0.00 0.64 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    7-0 - 0/0/31 . 1.15 382 32 0.0 0.00 0.75 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    8-0 - 0/0/21 . 0.28 403 19 0.0 0.00 0.20 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    9-0 - 0/0/32 . 1.37 288 16 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    10-0 - 0/0/33 . 1.72 383 16 0.0 0.00 0.40 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0
    I am running Django 1.3. This is a mod_wsgi configuration, and the wsgi.conf file is copied below:

    <IfModule !python_module>
    <IfModule !wsgi_module>
    LoadModule wsgi_module modules/mod_wsgi.so
    <IfModule wsgi_module>
    <Directory /var/www/osqa>
    Order allow,deny
    Allow from all
    #Deny from all
    </Directory>
    WSGISocketPrefix /var/run/wsgi
    WSGIPythonEggs /var/tmp
    WSGIDaemonProcess OSQA maximum-requests=10000
    WSGIProcessGroup OSQA
    Alias /admin_media/ /usr/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/contrib/admin/media/
    Alias /m/ /var/www/osqa/forum/skins/
    Alias /upfiles/ /var/www/osqa/forum/upfiles/
    <Directory /var/www/osqa/forum/skins>
    Order allow,deny
    Allow from all
    </Directory>
    WSGIScriptAlias / /var/www/osqa/osqa.wsgi
    </IfModule>
    </IfModule>
    </IfModule>

    This is the httpd.conf file:

    Timeout 120
    KeepAlive Off
    MaxKeepAliveRequests 100
    MaxKeepAliveRequests 400
    KeepAliveTimeout 3
    <IfModule prefork.c>
    Startservers 15
    MinSpareServers 10
    MaxSpareServers 20
    ServerLimit 50
    MaxClients 50
    MaxRequestsPerChild 0
    </IfModule>
    <IfModule worker.c>
    StartServers 4
    MaxClients 150
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 0
    </IfModule>

    We are using MySQL. The server is an ESXi 4 host, with the VM configured to use 4 CPUs and 8 GB of RAM. Hyper-threading is enabled on 2 physical CPUs, giving 4 logical ones; the CPUs are Intel Xeon 2.8 GHz. Total host memory is 12 GB.
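
    One way to rule the WSGI daemon in or out as the source of the spike (a sketch, not a verified fix: processes, threads and display-name are standard mod_wsgi options, but the values here are guesses) is to give the daemon an explicit process/thread budget and its own process name:

        # Hypothetical tuning - adjust counts to the workload
        WSGIDaemonProcess OSQA processes=2 threads=15 maximum-requests=10000 display-name=%{GROUP}
        WSGIProcessGroup OSQA

    With display-name set, top -c shows the daemon processes separately from httpd, which makes it easier to see whether the hourly spike comes from the Python application or from Apache itself.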

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight and HTML development. I realized that the question can be answered in many ways. Below is a high-level comparison between an HTML/JavaScript client and a Silverlight client, and why Silverlight was chosen over an HTML/JavaScript client (based on the type of users and the major functionality provided):

    1. For end users

    Browser compatibility: Silverlight is a plug-in and requires installation first. However, it does provide a consistent look and feel across all browsers. For HTML/DHTML, there is a need to tweak JavaScript for each of the browsers supported. In fact, tags like <span> and <div> work differently on different browsers and versions. So HTML works on most systems, but also requires a lot of coding effort to adhere to all standards, browsers and versions.

    Out-of-browser support: There is no support in HTML. Third-party tools like Google Gears offer some functionality, but there are lots of issues around platform and accessibility. Silverlight has out-of-the-box support for running out of the browser, and provides features like drag-and-drop onto the application surface.

    Cut, copy and paste: HTML is displayed in the browser, which in turn provides facilities for cut, copy and paste. Silverlight (especially 4) provides rich features for cut-copy-paste, along with full control over what end users can cut, copy and paste, and advanced features like visual tree printing.

    Rich user experience: HTML can provide some rich experience through JavaScript libraries like jQuery. However, extensive use of JavaScript, combined with the various browser versions and the JavaScript they support, makes the solution cumbersome. Silverlight is meant for the RIA experience.

    User data storage on the client: In HTML, only a small amount of data can be stored, and only in cookies. In Silverlight, large amounts of data may be stored, and in a secure way. This improves response times.

    Post back: In HTML/JavaScript, postbacks can be avoided with AJAX, but extensive use of AJAX can be a bottleneck as the browser stack is used for the calls, and both the look and feel and the data travel over the network. In Silverlight, everything runs on the client side; calls are made to the server ONLY for data, which also reduces network traffic in the long run.

    2. For Developers

    Coding effort: HTML/JavaScript can take a considerable amount of time to code if the required features are rich. For AJAX-like interfaces, knowledge of third-party kits like Dojo, Yahoo UI or jQuery is required, which have a steep learning curve. The ASP.NET coding world revolves mostly around <table> tags for alignment, whereas most popular tools produce <div> tags, which requires lots of tweaking. AJAX calls can be a performance bottleneck if the calls are many. In Silverlight, coding is in C#, which is managed code. XAML is also very intuitive, and Blend can be used to provide the look and feel. Event handling is much cleaner than in JavaScript. Silverlight provides for many clean patterns like MVVM and composable applications. Each call to the server is asynchronous in Silverlight; AJAX is built into it. Threading can be done on the client side itself to provide better responsiveness; etc.

    Debugging: Debugging in HTML/JavaScript is difficult. As JavaScript is interpreted, there is NO compile-time error handling. Debugging in Silverlight is very helpful: as it is compiled, it provides rich features for both compile-time and run-time error handling.

    Multi-targeting browsers: HTML/JavaScript has different rendering behaviours in different browsers and their versions.
    JavaScript has to be written to smooth over the differences in browser behaviours. Silverlight works exactly the same in all browsers, and works on almost all popular browsers.

    Multi-targeting the desktop: There is no support in HTML/JavaScript. Silverlight is very close to WPF; both platforms may easily be targeted while maintaining the same source code.

    Rich toolkit: HTML/JavaScript has a limited toolkit of controls. Silverlight provides a rich set of controls including graphs, audio, video, layout, etc.

    3. For Architects

    Design patterns: Silverlight provides for patterns like MVVM (MVC) and a rich (fat) client architecture. This makes the separation of concerns very clear: the client (Silverlight) does what it is expected to do, and the server does what is expected of it. In the HTML/JavaScript world, most of the processing is done on the server side.

    Extensibility: Silverlight provides a great deal of extensibility, as custom controls may be built. Extensibility is NOT restricted by the browser, but by the plug-in Silverlight runs in. HTML/JavaScript works in a certain way, and extensibility is generally achieved on the server side rather than the client end; the client side is restricted by the limitations of the browser.

    Performance: Silverlight provides localized storage which may be used for cached data, reducing response times. As processing can be done on the client side itself, there is no need for server round trips, which decreases the round-trip time. The look and feel of the application is downloaded ONLY initially; afterwards ONLY data is fetched from the server.

    Security: Silverlight is compiled code downloaded as a .XAP; compared to HTML/JavaScript, it provides a more secure, sandboxed approach. Cross-scripting is inherently prohibited in Silverlight by default. If proper guidelines are followed, Silverlight provides a much more robust security mechanism than the HTML/JavaScript world. For example, finding the server address in obfuscated JavaScript is easier than in a compressed, compiled, obfuscated Silverlight .XAP file.

    Some of these features (like offline and canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification is expected to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. See http://en.wikipedia.org/wiki/HTML5 for details.

    The above is MY opinion. I will love to hear yours; do let me know via comments. Technorati Tags: Silverlight
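
    To make the client-side storage point above concrete, here is a minimal Silverlight sketch using isolated storage (IsolatedStorageSettings is the standard Silverlight API; the class, key name and values are made up for illustration):

        using System.IO.IsolatedStorage;

        public static class ClientState
        {
            public static void RememberLastPage(string page)
            {
                // Persist a small value on the client between sessions
                var settings = IsolatedStorageSettings.ApplicationSettings;
                settings["lastVisitedPage"] = page;   // hypothetical key
                settings.Save();
            }

            public static string RecallLastPage()
            {
                // Read it back later without any server round trip
                var settings = IsolatedStorageSettings.ApplicationSettings;
                return settings.Contains("lastVisitedPage")
                    ? (string)settings["lastVisitedPage"]
                    : null;
            }
        }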

    Read the article

  • Changing an HTML Form's Target with jQuery

    - by Rick Strahl
    This is a question that comes up quite frequently: I have a form with several submit or link buttons and one or more of the buttons needs to open a new window. How do I get several buttons to all post to the right window? If you're building ASP.NET forms you probably know that by default the Web Forms engine sends button clicks back to the server as a POST operation. A server form has a <form> tag which expands to this:

    <form method="post" action="default.aspx" id="form1">

    Now you CAN change the target of the form and point it to a different window or frame, but the problem with that is that it still affects ALL submissions of the current form. If you have multiple buttons/links and they need to go to different target windows/frames, you can't do it easily through the <form runat="server"> tag. Although this discussion uses ASP.NET WebForms as an example, realistically this is a general HTML problem, although it's likely more common in WebForms due to the single-form metaphor it uses. In ASP.NET MVC, for example, you'd have more options by breaking out each button into a separate form with its own distinct target tag. However, even with that option it's not always possible to break up forms - for example, if multiple targets are required but all targets require the same form data to be posted. A common scenario here is that you might have a button (or link) that you click where you still want some server code to fire, but at the end of the request you actually want to display the content in a new window. A common operation where this happens is report generation: you click a button and the server generates a report, say in PDF format, and you then want to display the PDF result in a new window without killing the content in the current window. Assuming you have other buttons on the same page that need to post to the base window, how do you get the button click to go to a new window?

    Can't you just use a LinkButton or other Link Control?
    At first glance you might think an easy way to do this is to use an ASP.NET LinkButton - after all, a LinkButton creates a hyperlink that CAN accept a target and it also posts back to the server, right? However, there's no Target property, although you can set the target HTML attribute easily enough. Code like this looks reasonable:

    <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" target="_blank" OnClick="bnNewTarget_Click" />

    But if you try this you'll find that it doesn't work. Why? Because ASP.NET creates postbacks with JavaScript code that operates on the current window/frame:

    <a id="btnNewTarget" target="_blank" href="javascript:__doPostBack(&#39;btnNewTarget&#39;,&#39;&#39;)">New Target</a>

    What happens with a target tag is that before the JavaScript actually executes, a new window is opened and the focus shifts to it. The new window of course is empty and has no __doPostBack() function nor access to the old document. So when you click the link a new window opens, but the window remains blank without content - no server postback actually occurs. Natch that idea.

    Setting the Form Target for a Button Control or LinkButton
    So, in order to send postback link controls and buttons to another window/frame, both require that the target of the form gets changed dynamically when the button or link is clicked. Luckily this is rather easy to do using a little bit of script code and jQuery.
    Imagine you have two buttons like this that should go to another window:

    <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" OnClick="ClickHandler" />
    <asp:Button runat="server" ID="btnButtonNewTarget" Text="New Target Button" OnClick="ClickHandler" />

    ClickHandler in this case is any routine that generates the output you want to display in the new window. Generally this output will not come from the current page markup but is generated externally - like a PDF report or some report generated by another application component or tool. The output will generally either be generated by hand or be something that was generated to disk, to be displayed with Response.Redirect() or Response.TransmitFile() etc. Here's the dummy handler that just generates some HTML by hand and displays it:

    protected void ClickHandler(object sender, EventArgs e)
    {
        // Perform some operation that generates HTML or Redirects somewhere else
        Response.Write("Some custom output would be generated here (PDF, non-Page HTML etc.)");

        // Make sure this response doesn't display the page content
        // Call Response.End() or Response.Redirect()
        Response.End();
    }

    To route this oh-so-sophisticated output to an alternate window for both the LinkButton and Button controls, you can use the following simple script code:

    <script type="text/javascript">
        $("#btnButtonNewTarget,#btnNewTarget").click(function () {
            $("form").attr("target", "_blank");
        });
    </script>

    So why does this work where the target attribute did not? The difference here is that the script fires BEFORE the target is changed to the new window. When you put a target attribute on a link or form, the target is changed as the very first thing before the link actually executes; IOW, the link literally executes in the new window when it's done that way. By attaching a click handler, though, we're not navigating yet, so all the operations the script code performs (ie. __doPostBack()) and the collection of form variables to post to the server all occur in the current page. By changing the target from within script code, the target change fires as part of the form submission process, which means it runs in the correct context of the current page. IOW - the input for the POST is from the current page, but the output is routed to a new window/frame. Just what we want in this scenario. Voila - you can dynamically route output to the appropriate window.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  HTML  jQuery
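
    One wrinkle worth noting with this approach: once the form's target has been switched to _blank, any later submit from a "normal" button would also post to a new window. A small extension of the same idea (a sketch reusing the hypothetical button IDs from above) resets the target for every other submit button:

    <script type="text/javascript">
        // Buttons that post to a new window (from the example above)
        $("#btnButtonNewTarget,#btnNewTarget").click(function () {
            $("form").attr("target", "_blank");
        });
        // Every other submit button goes back to posting into this window
        $("input[type=submit]").not("#btnButtonNewTarget").click(function () {
            $("form").attr("target", "_self");
        });
    </script>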

    Read the article

  • CodePlex Daily Summary for Wednesday, May 19, 2010

    CodePlex Daily Summary for Wednesday, May 19, 2010

    New Projects
    3FD - Framework For Fast Development: This is a C++ framework that provides a solid error handling structure, garbage collection, multi-threading and portability between compilers. The ...
    ali test project: test project
    Attribute Builder: The Attribute Builder builds an attribute from a lambda expression because it can.
    BDK0008: it is a food lovers website
    cgdigest: cg digest template for non-profit org
    Cokmez: An announcement system for Bilmuh
    Dot Game: A dot game that Bangladeshi people used to play in their childhood.
    ESRI Javascript .NET Integration: Visual Studio project that shows how to integrate the Esri Javascript API with .NET
    Exchange 2010 RBAC Editor (RBAC GUI): Developed in C# and using PowerShell behind the scenes; an RBAC tool to simplify RBAC administration
    File Validator (Validador de Archivos): A component for validating files (txt, images, PDF, etc.); currently only the txt part is implemented
    Grip 09 Lab4: Grip
    jPageFlipper: This is a wonderful implementation of a page flipper entirely based on the HTML 5 <canvas> tag. It means that it can work in any browser that supports HT...
    Main project: Index bird families and associated species.
    Malware Analysis and Can Handler: MACH is a tool to organize and catalog your malware analysis canned responses, and to track the topic response lifecycle for forum experts.
    Perf Web: Performance team web site
    PiPiBugNet: PiPiBugNet is a brand-new open-source bug management system. The code is built on the ASP.NET 2.0 platform in C#, and the UI is designed with Ext JS, providing an excellent user experience.
    RemoteDesktop: integrated remote console, desktop and chat utility
    RuneScape emulation done right.: RuneScape emulator.
    Sandkasten: Sandkasten
    Silverlight Metro Theme: Metro Theme for Silverlight.
    Silverlight Stereoscopy: Stereoscopy with Silverlight.
    Twitivia: Twitivia is an online trivia service that runs through twitter and is being used as an example set of projects. C#, MVC, Windows Services, Linq ...
    XPool: A simple school project.

    New Releases
    Dot Game: 'Dot Game' first release: This is the 'Dot Game' first release.
    DotNetNuke® Store: 02.01.35: What's new in this release? Bugs corrected: fixed a resource for the header in the Category list of the Store Admin module; added several test...
    ESRI Javascript .NET Integration: Map search results in a DataView: Visual Studio 2010 example showing how to pass Map results back to ASP.NET for use in a DataView.
    Exchange 2010 RBAC Editor (RBAC GUI): RBAC Editor: This binary is still beta (0.0.9.1) but in most cases it's very stable
    Extending C# editor - Outlining, classification: first revision: a couple of bugs have been eliminated; performance improvements
    Floe IRC Client: Floe IRC Client 2010-05 R6: Corrected bug where text would be unexpectedly copied to the clipboard.
    Floe IRC Client: Floe IRC Client 2010-05 R7: Fixed bug where text would show up in a query window with someone if they said something on a channel that you are both present on.
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.9 GA released: Today we have released the final version of Visifire v3.0.9, which contains the following enhancements: two new properties, ActualAxisMin...
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.2 GA Released: Today we have released the final version of Visifire v3.5.2, which contains the following enhancements: two new properties, ActualAxisMinimum a...
    HB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.02: Added .mov support.
    jPageFlipper: jPageFlipper 0.9: This is an initial community preview of jPageFlipper. It's not ready for production usage but has almost all functionality implemented.
    linq.js - LINQ for JavaScript: ver 2.1.0.0: Added classes: Dictionary, Lookup, Grouping, OrderedEnumerable. Added methods: ToDictionary, MemoizeAll, Share, Let. Added overload ...
    Microsoft Research Biology Extension for Excel: MSR Biology Extension for Excel - M9: The M9 release includes the following updates to the previous release: import/export support from Excel for multiple file formats; bug fixes and ...
    Nifty CSharp Tools: Event Watcher: Event Watcher!
    Paint.NET Bulk Image Processor: Paint.NET Bulk Image Processor v1.0: This is the initial release of the Paint.NET Bulk Image Processor plugin. All feedback is welcome.
    PiPiBugNet: PiPiBugNet architecture design: Architecture design; feature implementation not yet included
    RuneScape emulation done right.: rc0: Release candidate 0.
    Rx Contrib: V1.6: Adding a CCR queue as an adapter for the ReactiveQueue. Credits go to Yuval Mazor, http://blogs.microsoft.co.il/blogs/yuvmaz/
    Silverlight Metro Theme: Silverlight Metro Theme Alpha 1: Silverlight Metro Theme Alpha 1
    Silverlight Stereoscopy: Silverlight Stereoscopy Alpha 1: Silverlight Stereoscopy Alpha 20100518
    Stratosphere: Stratosphere 1.0.6.0: Introduced support for batch put; introduced support for conditional updates and consistent read; added support for select conditions; brought t...
    VCC: Latest build, v2.1.30518.0: Automatic drop of latest build
    Video Downloader: Example Program - 1.1: Example program showing the features of the DLL and what can be achieved using it. For DLL version 1.1.
    Video Downloader: Version 1.1: Version 1.1. See the home page for usage and more information regarding new features. Please remember changes at You-Tube can prevent this software from...
    WatchersNET.TagCloud: WatchersNET.TagCloud 01.06.00: What's new: new Tag mode, show tags from the Ventrian.com NewsArticles module; new Tag mode, show tags from the Ventrian.com SimpleGallery module; Hyperlin...
    Windows Double Explorer: WDE v0.4: optimization; switch to new vst2010; viewer now closes by pressing escape; reorder tabs; send selected full names or short names via email (eye button...)

    Most Popular Projects
    Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects
    patterns & practices – Enterprise Library; Rawr; PHPExcel; GMap.NET - Great Maps for Windows Forms & Presentation; Customer Portal Accelerator for Microsoft Dynamics CRM; BlogEngine.NET; Windows Azure Command-line Tools for PHP Developers; CassiniDev - Cassini 3.5/4.0 Developers Edition; SQL Server PowerShell Extensions; Fluent Ribbon Control Suite

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino.

    The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK's Da Vinci Machine helps coordinate work amongst language developers and tool designers, and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8's lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components.

    There are a number of benefits to running JavaScript applications on JDK 8's Nashorn technology beyond writing scripts quickly:

    - Interoperability with Java and JavaScript libraries.
    - Scripts do not need to be compiled.
    - Fast execution and multi-threading of JavaScript running in Java's JRE.
    - The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog).
    - Automatic integration with Java monitoring tools, such as performance, health, and SIEM.

    In the remainder of this blog post, I will explain how to use Nashorn and benefit from those features.

    Nashorn execution environment
    The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written for Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways:

    - Users running JavaScript applications can call the binary directly: jre8/bin/jjs. This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs
    - Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

    When using a ScriptEngine, please understand that it executes code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present.

    Examples of Java and JavaScript
    The Nashorn script engine allows developers of all experience levels to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1, as identified by the User Guide and its standards definition through ECMA International. In addition to the example below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages.
    Basic Operations
    A basic Hello World application written to run on Nashorn would look like this:

    #!/usr/bin/jjs
    print("Hello World");

    The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows, where scripts are not as common, you would run the script like: jjs helloWorld.js.

    Receiving Arguments
    In order to receive program arguments, your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself: jjs -scripting print.js -- "This will print"

    #!/usr/bin/jjs
    var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0]
    print(whatYouSaid);

    Interoperability with Java libraries (including 3rd party dependencies)
    Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access to Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads), so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The "import statements" are written somewhat differently to accommodate the language. There is a choice of two styles:

    - For standard classes, just name the class: var ServerSocket = java.net.ServerSocket
    - For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]"). You could technically use this style for all types.

    The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items.

    Building a user interface
    One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title"

    #!/usr/bin/jjs -fx
    var Button = javafx.scene.control.Button;
    var StackPane = javafx.scene.layout.StackPane;
    var Scene = javafx.scene.Scene;

    var clickCounter = 0;
    $STAGE.title = $ARG.length > 0 ? $ARG[0] : "You didn't provide a title";

    var button = new Button();
    button.text = "Say 'Hello World'";
    button.onAction = myFunctionForButtonClicking;

    var root = new StackPane();
    root.children.add(button);
    $STAGE.scene = new Scene(root, 300, 250);
    $STAGE.show();

    function myFunctionForButtonClicking() {
        var text = "Click Counter: " + clickCounter;
        button.setText(text);
        clickCounter++;
        print(text);
    }

    For a more advanced post on using Nashorn to build a high-performing UI, see the JavaFX with Nashorn Canvas example.

    Interoperable with frameworks like Node, Backbone, or Facebook React
    The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules.
    In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8's Nashorn.

    Summary
    Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within:

    - The Nashorn Users' Guide
    - Java Magazine's article "Next Generation JavaScript Engine for the JVM"
    - The Nashorn team's primary blog, or a very helpful collection of Nashorn links
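
    To round out the ScriptEngine route mentioned above, here is a minimal Java sketch of defining a JavaScript function and invoking it from Java (standard javax.script API; the greet function and class name are just examples):

    import javax.script.Invocable;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class NashornDemo {
        public static void main(String[] args) throws Exception {
            // Obtain the Nashorn engine shipped with JDK 8
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            // Define a JavaScript function, then invoke it from Java
            engine.eval("function greet(name) { return 'Hello, ' + name; }");
            Invocable invocable = (Invocable) engine;
            System.out.println(invocable.invokeFunction("greet", "Nashorn"));
        }
    }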

    Read the article

  • How to troubleshoot problem with OpenVpn Appliance Server not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 box.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as instructed; the only issue was that the legacy network adapter does not have the setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) The server is running and the web interfaces look good.
    5) I am trying to connect from a Vista 64 box and cannot.
    5a) If I set it to UDP, I am stuck at Authorizing, and the client log looks like:

    10/11/09 15:00:42: INFO: OvpnConfig: connect...
    10/11/09 15:00:42: INFO: Gui listen socket at 34567
    10/11/09 15:00:42: INFO: sending start command to instantiator...
    10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
    10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
    10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
    10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
    10/11/09 15:00:43: INFO: Processing PASSWORD.
    10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
    10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
    10/11/09 15:00:43: INFO: Got auth request from active_config from 0
    10/11/09 15:00:47: INFO: Sending Credentials....
    10/11/09 15:00:47: INFO: Sending 25 bytes for username.
    10/11/09 15:00:47: INFO: Sent 25 bytes for username.
    10/11/09 15:00:47: INFO: Sending 30 bytes for password.
    10/11/09 15:00:47: INFO: Sent 30 bytes for password.
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
    10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
    10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server up for TCP and try to connect, I get a loop of authorizing and reconnecting. The log looks like:

    10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
    10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
    10/11/09 15:00:43: INFO: Processing PASSWORD.
    10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
    10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
    10/11/09 15:00:43: INFO: Got auth request from active_config from 0
    10/11/09 15:00:47: INFO: Sending Credentials....
    10/11/09 15:00:47: INFO: Sending 25 bytes for username.
    10/11/09 15:00:47: INFO: Sent 25 bytes for username.
    10/11/09 15:00:47: INFO: Sending 30 bytes for password.
    10/11/09 15:00:47: INFO: Sent 30 bytes for password.
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
    10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
    10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
    10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
    ...

    Is there any way to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
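
    On a stock OpenVPN server, verbosity is raised with the verb directive in the server configuration; how the appliance exposes its config may differ, so treat the following as a sketch (the file paths are assumptions):

        # In the server config (e.g. /etc/openvpn/server.conf - path may differ on the appliance)
        verb 5                                   # 0-11; 4-5 is usually enough to see auth failures
        log-append /var/log/openvpn.log          # keep one growing log across restarts
        status /var/log/openvpn-status.log 10    # refresh connection status every 10 seconds

    Raising verb on the client config as well lets you compare both ends of the handshake when the authorizing/reconnecting loop occurs.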

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology to interconnect it with the compute nodes
    - Tune the application to scale with the nodes to yet unseen performance
    - Reduce the amount of data moving via compression
    - Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now; the real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets; that is, a single compute node can simultaneously execute 256 threads in parallel. A full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too - a humble 5x better than their ancestors.
    - Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip, for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within one physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
    That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.

    - They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance; there is Flash storage included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:

    - Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
    - RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

    IV. It includes a general-purpose storage too: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster, network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares
    - Storage heads in an HA cluster configuration, providing availability of the data
    - DTrace Live Analytics in a web-based administration UI
    - It serves as a general-purpose application data store for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half-rack or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please, too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
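
    To give a flavour of the built-in virtualization mentioned above, carving out a guest domain with Oracle VM for SPARC comes down to a handful of ldm commands (a sketch: the domain name and sizes are made up, and the exact virtual network/disk plumbing depends on the installation):

        # Carve a guest domain out of the T4-4's resources (names/sizes illustrative)
        ldm add-domain appdom
        ldm set-vcpu 32 appdom
        ldm set-memory 16G appdom
        ldm add-vnet vnet0 primary-vsw0 appdom
        # ... attach a virtual boot disk for the domain, then:
        ldm bind-domain appdom
        ldm start-domain appdom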
    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks in size
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of all - and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    Like many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro Applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous, and use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while they were in transit. For doing that, it uses a signature or hash of the message content and some of the headers using a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working for the first time in Metro will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity introduced in generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found myself trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the use of static methods in a WinRT class, CryptographicBuffer, available as part of the namespace previously mentioned.

    private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
    {
        var stringToSign = string.Format("{0}\n" +
            "\n" +
            "\n" +
            "\n" +
            "x-amz-date:{1}\n" +
            "/{2}/",
            httpMethod,
            timestamp,
            resource);

        var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
        var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
        var hmacKey = algorithm.CreateKey(keyMaterial);

        var signature = CryptographicEngine.Sign(
            hmacKey,
            CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
        );

        return CryptographicBuffer.EncodeToBase64String(signature);
    }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket.
    An HMAC-SHA1 hash is computed using a secret (symmetric) key provided by AWS in the management console.

    2. Sending an Http Request to the S3 service

    WinRT also ships with System.Net.Http.HttpClient, which was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in that one. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is something absolutely required for consuming S3. Also, HttpClient is friendlier for unit tests, as it receives an HttpMessageHandler as part of the constructor that can be faked to emulate a real http call. This is how the code for consuming the service with HttpClient looks:

    public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
    {
        var timestamp = string.Format("{0:r}", DateTime.UtcNow);
        var auth = DeriveAuthToken(name, "PUT", timestamp);

        var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
        request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
        request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
        request.Headers.Add("x-amz-date", timestamp);

        var client = new HttpClient();
        var response = await client.SendAsync(request);

        return new S3Response
        {
            Succeed = response.StatusCode == HttpStatusCode.OK,
            Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
        };
    }

    You will notice a few additional things in this code. By default, HttpClient validates the values for some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to transform all the asynchronous calls into synchronous-looking ones. In case you want to unit test this code and fake the call to the real S3 service, you would have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates this concept:

    public class FakeHttpMessageHandler : HttpMessageHandler
    {
        HttpResponseMessage response;

        public FakeHttpMessageHandler(HttpResponseMessage response)
        {
            this.response = response;
        }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
            System.Threading.CancellationToken cancellationToken)
        {
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(response);
            return tcs.Task;
        }
    }

    You can use this handler for injecting any response while you are unit testing the code.
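
    For completeness, calling the bucket-creation method from an async Metro event handler would look roughly like this (S3Client, the credential placeholders and ShowError are hypothetical stand-ins for whatever class hosts CreateBucket and your UI code):

    // Hypothetical wrapper class holding the key/secret fields used by the post's methods
    var s3 = new S3Client("MY-AWS-KEY", "MY-AWS-SECRET");

    // Awaited from an async event handler, e.g. private async void OnCreate(object s, RoutedEventArgs e)
    S3Response response = await s3.CreateBucket("my-metro-test-bucket");

    if (!response.Succeed)
    {
        // Message carries the response body returned by S3
        ShowError(response.Message);   // hypothetical UI helper
    }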


  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the raw APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for I/O operations such as making HTTP calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model in .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the HTTP Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while in transit. For doing that, it uses a signature or hash of the message content and some of the headers using a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working for the first time in Metro will face when consuming S3: the new WinRT APIs, their asynchronous nature, and the complexity involved in generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found myself trying to consume this Amazon service. 1. Generating the signature for the Authorization header All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the use of static methods in the WinRT class CryptographicBuffer available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            var stringToSign = string.Format("{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod,
                timestamp,
                resource);

            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);

            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
            );

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket.
    An HMAC-SHA1 hash is computed using a secret (symmetric) key provided by AWS in the management console. 2. Sending an HTTP request to the S3 service WinRT also ships with System.Net.Http.HttpClient, which was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class and also solves some of the limitations found in that class. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is absolutely required for consuming S3. Also, HttpClient is friendlier for unit testing, as it receives an HttpMessageHandler as part of its constructor, which can be faked to emulate a real HTTP call. This is how the code for consuming the service with HttpClient looks:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values of some well-known headers, and Authorization is one of them. It won't allow you to set a value containing ":", which is something S3 expects. That's not a problem, however, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords, so the asynchronous calls read like synchronous code. In case you want to unit test this code and fake the call to the real S3 service, you have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler to inject any response you want while unit testing the code.
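    For instance, a unit test could stub out the S3 call along these lines - a minimal sketch of my own, where the status code and empty body are placeholders rather than real S3 output:

        // Stub the HTTP call with a canned response (placeholder values).
        var fakeResponse = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(string.Empty)
        };
        // HttpClient accepts the handler through its constructor, so the code
        // under test only needs to allow an HttpClient instance to be passed in.
        var client = new HttpClient(new FakeHttpMessageHandler(fakeResponse));

    With that in place, the CreateBucket method shown earlier would report Succeed = true without generating any network traffic.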

    Read the article

  • Pixel Shader Giving Black output

    - by Yashwinder
    I am coding in C# using Windows Forms and the SlimDX API to show the effect of a pixel shader. When I set the pixel shader I get a black output screen, but if I don't use the pixel shader my image is rendered on the screen. I have the following C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Forms;
        using System.Runtime.InteropServices;
        using SlimDX.Direct3D9;
        using SlimDX;
        using SlimDX.Windows;
        using System.Drawing;
        using System.Threading;

        namespace WindowsFormsApplication1
        {
            // Vertex structure.
            [StructLayout(LayoutKind.Sequential)]
            struct Vertex
            {
                public Vector3 Position;
                public float Tu;
                public float Tv;

                public static int SizeBytes
                {
                    get { return Marshal.SizeOf(typeof(Vertex)); }
                }

                public static VertexFormat Format
                {
                    get { return VertexFormat.Position | VertexFormat.Texture1; }
                }
            }

            static class Program
            {
                public static Device D3DDevice;      // Direct3D device.
                public static VertexBuffer Vertices; // Vertex buffer object used to hold vertices.
                public static Texture Image;         // Texture object to hold the image loaded from a file.
                public static int time;              // Used for rotation calculations.
                public static float angle;           // Angle of rotation.
                public static Form1 Window = new Form1();
                public static string filepath;
                static VertexShader vertexShader = null;
                static ConstantTable constantTable = null;
                static ImageInformation info;

                [STAThread]
                static void Main()
                {
                    filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg";
                    info = new ImageInformation();
                    info = ImageInformation.FromFile(filepath);

                    PresentParameters presentParams = new PresentParameters();
                    // Below are the required bare minimum, needed to initialize the D3D device.
                    presentParams.BackBufferHeight = info.Height;      // BackBufferHeight, set to the Window's height.
                    presentParams.BackBufferWidth = info.Width + 200;  // BackBufferWidth, set to the Window's width.
                    presentParams.Windowed = true;
                    presentParams.DeviceWindowHandle = Window.panel2.Handle; // DeviceWindowHandle, set to the Window's handle.

                    // Create the device.
                    D3DDevice = new Device(new Direct3D(), 0, DeviceType.Hardware, Window.Handle, CreateFlags.HardwareVertexProcessing, presentParams);

                    // Create the vertex buffer and fill with the triangle vertices. (Non-indexed)
                    // Remember 3 vertices for a triangle, 2 tris per quad = 6.
                    Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly, VertexFormat.None, Pool.Managed);
                    DataStream stream = Vertices.Lock(0, 0, LockFlags.None);
                    stream.WriteRange(BuildVertexData());
                    Vertices.Unlock();

                    // Create the texture.
                    Image = Texture.FromFile(D3DDevice, filepath);

                    // Turn off culling, so we see the front and back of the triangle
                    D3DDevice.SetRenderState(RenderState.CullMode, Cull.None);
                    // Turn off lighting
                    D3DDevice.SetRenderState(RenderState.Lighting, false);

                    ShaderBytecode sbcv = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs", "vs_main", "vs_1_1", ShaderFlags.None);
                    constantTable = sbcv.ConstantTable;
                    vertexShader = new VertexShader(D3DDevice, sbcv);

                    ShaderBytecode sbc = ShaderBytecode.CompileFromFile("C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt", "ps_main", "ps_3_0", ShaderFlags.None);
                    PixelShader ps = new PixelShader(D3DDevice, sbc);

                    VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[] {
                        new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.PositionTransformed, 0),
                        new VertexElement(0, 12, DeclarationType.Float2, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
                        VertexElement.VertexDeclarationEnd
                    });

                    Application.EnableVisualStyles();
                    MessagePump.Run(Window, () =>
                    {
                        // Clear the backbuffer to a black color.
                        D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);

                        // Begin the scene.
                        D3DDevice.BeginScene();

                        // Setup the world, view and projection matrices.
                        //D3DDevice.VertexShader = vertexShader;
                        //D3DDevice.PixelShader = ps;

                        // Render the vertex buffer.
                        D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes);
                        D3DDevice.VertexFormat = Vertex.Format;

                        // Setup our texture. Using Textures introduces the texture stage states,
                        // which govern how Textures get blended together (in the case of multiple
                        // Textures) and lighting information.
                        D3DDevice.SetTexture(0, Image);

                        // Now drawing 2 triangles, for a quad.
                        D3DDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);

                        // End the scene.
                        D3DDevice.EndScene();

                        // Present the backbuffer contents to the screen.
                        D3DDevice.Present();
                    });

                    if (Image != null)
                        Image.Dispose();
                    if (Vertices != null)
                        Vertices.Dispose();
                    if (D3DDevice != null)
                        D3DDevice.Dispose();
                }

                private static Vertex[] BuildVertexData()
                {
                    Vertex[] vertexData = new Vertex[6];
                    vertexData[0].Position = new Vector3(-1.0f, 1.0f, 0.0f);
                    vertexData[0].Tu = 0.0f;
                    vertexData[0].Tv = 0.0f;
                    vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f);
                    vertexData[1].Tu = 0.0f;
                    vertexData[1].Tv = 1.0f;
                    vertexData[2].Position = new Vector3(1.0f, 1.0f, 0.0f);
                    vertexData[2].Tu = 1.0f;
                    vertexData[2].Tv = 0.0f;
                    vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f);
                    vertexData[3].Tu = 0.0f;
                    vertexData[3].Tv = 1.0f;
                    vertexData[4].Position = new Vector3(1.0f, -1.0f, 0.0f);
                    vertexData[4].Tu = 1.0f;
                    vertexData[4].Tv = 1.0f;
                    vertexData[5].Position = new Vector3(1.0f, 1.0f, 0.0f);
                    vertexData[5].Tu = 1.0f;
                    vertexData[5].Tv = 0.0f;
                    return vertexData;
                }
            }
        }

    And my pixel shader and vertex shader code are as follows:

        // Pixel shader input structure
        struct PS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Pixel shader output structure
        struct PS_OUTPUT
        {
            float4 Color : COLOR0;
        };

        // Global variables
        sampler2D Tex0;

        // Name: Simple Pixel Shader
        // Type: Pixel shader
        // Desc: Fetch texture and blend with constant color
        PS_OUTPUT ps_main( in PS_INPUT In )
        {
            PS_OUTPUT Out;                            // create an output pixel
            Out.Color = tex2D(Tex0, In.Texture);      // do a texture lookup
            Out.Color *= float4(0.9f, 0.8f, 0.0f, 1); // do a simple effect
            return Out;                               // return output pixel
        }

        // Vertex shader input structure
        struct VS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Vertex shader output structure
        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Global variables
        float4x4 WorldViewProj;

        // Name: Simple Vertex Shader
        // Type: Vertex shader
        // Desc: Vertex transformation and texture coord pass-through
        VS_OUTPUT vs_main( in VS_INPUT In )
        {
            VS_OUTPUT Out;                                  // create an output vertex
            Out.Position = mul(In.Position, WorldViewProj); // apply vertex transformation
            Out.Texture = In.Texture;                       // copy original texcoords
            return Out;                                     // return output vertex
        }

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object, and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize the response into a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC, and my StructureMap initialization code looks like this:

        1: using StructureMap;
        2: using System.Net.Http;
        3:
        4: namespace FakeChannelExample
        5: {
        6:     public static class IoC
        7:     {
        8:         public static IContainer Initialize()
        9:         {
        10:             ObjectFactory.Initialize(x =>
        11:             {
        12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
        13:             });
        14:             return ObjectFactory.Container;
        15:         }
        16:     }
        17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in that interface, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
    If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just "following the links" that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response on my "fake" HTTP request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        1: using System;
        2: using System.IO;
        3: using System.Net.Http;
        4: using System.Runtime.Serialization;
        5: using System.Threading;
        6: using FakeChannelExample.Models;
        7:
        8: namespace FakeChannelExample.Tests
        9: {
        10:     public class FakeHttpChannel<T> : HttpClientChannel
        11:     {
        12:         private T responseObject;
        13:
        14:         public FakeHttpChannel(T responseObject)
        15:         {
        16:             this.responseObject = responseObject;
        17:         }
        18:
        19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        20:         {
        21:             return new HttpResponseMessage()
        22:             {
        23:                 RequestMessage = request,
        24:                 Content = new StreamContent(this.GetContentStream())
        25:             };
        26:         }
        27:
        28:         private Stream GetContentStream()
        29:         {
        30:             var serializer = new DataContractSerializer(typeof(T));
        31:             Stream stream = new MemoryStream();
        32:             serializer.WriteObject(stream, this.responseObject);
        33:             stream.Position = 0;
        34:             return stream;
        35:         }
        36:     }
        37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer.
    You might want to provide custom hypermedia in the response rather than just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
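    As a rough sketch of that kind of extension (my own illustration, not from the post), a header-aware variant could accept test-supplied headers and stamp them onto the canned response; everything else mirrors the FakeHttpChannel shown above (same usings, plus System.Collections.Generic):

        // Hypothetical variant of FakeHttpChannel: also stamps test-supplied
        // headers onto the fake response. Serialization is omitted for brevity -
        // see GetContentStream() in the class above.
        public class FakeHttpChannelWithHeaders<T> : HttpClientChannel
        {
            private readonly T responseObject;
            private readonly IDictionary<string, string> headers;

            public FakeHttpChannelWithHeaders(T responseObject, IDictionary<string, string> headers)
            {
                this.responseObject = responseObject;
                this.headers = headers;
            }

            protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var response = new HttpResponseMessage() { RequestMessage = request };
                foreach (var header in headers)
                {
                    response.Headers.Add(header.Key, header.Value);
                }
                return response;
            }
        }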

    Read the article

  • Outlook 2013 keeps freezing, semi-consistently

    - by AviD
    I have an odd problem with my Outlook's stability. It seems to be freezing up, not at random intervals, but based on a seemingly strange combination of configurations. I have been trying many different combinations; I've even devolved to "cargo-cult" debugging, since I have no clue what is causing this... Here is my setup - since I don't know for sure which settings are causing the lockup, I'll probably mention irrelevant things:
    - (relatively) clean install of Windows 8 (on Hyper-V, if that matters)
    - clean install of Outlook 2013, fully updated
    - 3 accounts configured:
      - Hotmail account configured with ActiveSync
      - Gmail account: large-ish (several GB), connected with IMAP; only a few folders are subscribed in IMAP; Outlook is set to only display subscribed folders; configured to keep messages permanently
      - Google Apps account: small account, connected with IMAP; all folders IMAP subscribed; Outlook is set to only display subscribed folders; configured to keep messages permanently
    - several Send/Receive groups configured, to try different configurations of enabling/disabling/partially enabling the different accounts - with different send times, from 60 minutes down to 5 minutes.
    The problem is that at certain points Outlook completely freezes up and I have to kill it. This is not consistent - some things cause it immediately and almost consistently, and sometimes it just happens by itself after some period of time (sometimes a few moments, sometimes a few hours; sometimes while using it, sometimes after I've been away from it for a few hours). I have searched all over, and there seem to be many people with an apparently similar problem, and I found numerous "solutions" (some even more cargo-cultish than mine), but so far none of them worked. What I've tried:
    - Removed all the accounts, both all together and one at a time, and re-configured them - eventually it freezes up.
    - Uninstalled Outlook and cleaned it up completely - removing files, app settings, registry keys, etc. - then reinstalled - eventually it freezes up.
    - Only the Hotmail account enabled, the Google accounts disabled (but not removed) - apparently this does not lock up.
    - Hotmail and Gmail accounts enabled, the Apps one disabled - it seems like it does not lock up.
    - All accounts enabled - it locks up almost immediately after doing a send/receive.
    - Only the Apps account enabled - it seems to not lock up.
    - Hotmail and Apps accounts enabled (Gmail disabled) - it seems like it locks up after a random amount of time.
    - Hotmail enabled, and Gmail and Apps both enabled but set to receive only custom folder downloading (not all subscribed folders) - sometimes it locks up right after a send/receive, sometimes it goes for hours without locking up, and sometimes it only locks up when I send an email.
    - Switched the ports for the Google accounts (SSL/465 vs TLS/587), though I have no idea if this should affect anything - no real difference.
    In short, I honestly have no idea what is actually causing Outlook to lock up; I might be completely barking up the wrong tree. At this point I don't really know what else to try; I'm flipping switches at random here. I would like to have all 3 accounts enabled, ideally in several groups (e.g. pull down only important folders in a group with a short interval, and all other folders at a longer interval) - obviously without freezing up at all.
    I've tried putting in all the important details; if there is anything else important to add, please let me know. Another issue that occurred to me might also be connected - the Google accounts don't always synchronize properly, even after a send/receive or "update folder". At least not consistently... though I haven't been able to find a significant connection between the two issues.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: one running the replication master, one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB
    Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest SET k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog.
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    0 worker threads: - current (i.e. single-threaded) sequential mode - 1 x IO thread and 1 x SQL thread - SQL thread both reads and executes the events
    1 worker thread: - sequential mode - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread - coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads: - parallel execution - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads - coordinator reads events and hands them to the workers, who execute them
    Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
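    For reference, pulling the settings named in this post into one place, the slave configuration for a run like this could look like the following sketch. The values are the ones listed above; the file layout and the [mysqld] section header are my assumptions:

        # my.cnf on the slave - settings named in this post
        [mysqld]
        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

        -- issued at runtime before starting the slave SQL thread,
        -- one worker per schema, as the results recommend:
        SET GLOBAL slave_parallel_workers=10;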
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high level operations like: Diff public types for breaking changes. Who uses a method? Who uses a type? Who implements an interface? Who references me? What format is the binary (32/64, Managed C++, Pure IL, Unmanaged)? Search for all event subscribers and unsubscribers. A unique feature is to check for event subscription imbalances. Forgotten event subscriptions are the cause of 90% of managed memory leaks. It is done at a per-class level: if one class subscribes to an event more often than it unsubscribes, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant, something that is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you compiled one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check out the command line help. The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the pdbs, it can show you the source file name and line number for all matches. The following picture shows the output of a -WhousesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item, along with file and line number. There is even a hyperlink to the match which will open Visual Studio. ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More info can be found in the CodePlex documentation. The tool was developed in a TDD-style manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you, I got permission to make it public, so you can take advantage of it. It is fully instrumented with tracing. If you find bugs, simply add the -trace command line switch to find out what is failing and send me the output. How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multi-threaded. The query above executed in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with a very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file.
    I have no problem unloading a file, but had I used reflection I would need to unload a whole AppDomain just to get rid of one assembly in my memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, I'll show you one of the unit tests:

        public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
        {
            // 1. Create an aggregator to collect our matches
            UsageQueryAggregator agg = new UsageQueryAggregator();

            // 2. This is the type we want to search for. Load it via the type query
            var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

            // 3. Register the type query which searches for uses of the Decimal type
            new WhoUsesType(agg, decimalType);

            // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
            agg.Analyze(TestConstants.DependandLibV1Assembly);

            // Extract matches and assert
            Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
        }

    Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure (CCI) Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is to try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I have not talked about (yet).
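    To give a feel for the programmatic side beyond the test above, here is a sketch that reuses exactly the same query surface against a different type. It relies only on the names shown in the unit test; whether Analyze() accepts a file path or a pre-loaded assembly object is not shown above, so the test constant is kept as a stand-in for your own assembly:

        // Sketch: same UsageQueryAggregator / WhoUsesType pattern as the unit
        // test above, searching for users of System.String instead.
        var agg = new UsageQueryAggregator();
        var stringType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.String");
        new WhoUsesType(agg, stringType);
        agg.Analyze(TestConstants.DependandLibV1Assembly); // substitute your own assembly here
        foreach (var match in agg.MethodMatches)
        {
            Console.WriteLine(match.Match.Name); // methods that use System.String
        }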

    Read the article

  • Windows Azure – Write, Run or Use Software

    - by BuckWoody
    Windows Azure is a platform that has you covered, whether you need to write software, run software that is already written, or install and use “canned” software, whether you or someone else wrote it. Like any platform, it’s a set of tools you can use where it makes sense to solve a problem. The primary location for Windows Azure information is http://windowsazure.com. You can find everything there, from the development kits for writing software to pricing, licensing and tutorials on all of that. I have a few links here for learning to use Windows Azure – although it’s best if you focus not on the tools, but on what you want to solve. I’ve got it broken down here into various sections, so you can quickly locate things you want to know. I’ll include resources here from Microsoft and elsewhere – I use these same resources in the Architectural Design Sessions (ADS) I do with my clients worldwide. Write Software Also called “Platform as a Service” (PaaS), Windows Azure has lots of components you can use together or separately that allow you to write software in .NET or various Open Source languages to work completely online, in partnership with code you have on-premises, or both – even if you’re using other cloud providers. Keep in mind that all of the features you see here can be used together or independently. For instance, you might only use a Web Site, or use Storage, but you can use both together. You can access all of these components through standard REST API calls, or using our Software Development Kit APIs, which are a lot easier. In any case, you simply use Visual Studio, Eclipse, Cloud9 IDE, or even a text editor to write your code from a Mac, PC or Linux. Components you can use:
    Azure Web Sites: Windows Azure Web Sites allow you to quickly write and deploy websites, without setting up a Virtual Machine, installing a web server or configuring complex settings. They work alone, with other Windows Azure Web Sites, or with other parts of Windows Azure.
    Web and Worker Roles: Windows Azure Web Roles give you a full stateless computing instance with Internet Information Services (IIS) installed and configured. Windows Azure Worker Roles give you a full stateless computing instance without Internet Information Services (IIS) installed, often used in a "Services" mode. Scale-out is achieved either manually or programmatically under your control.
    Storage: Windows Azure Storage types include Blobs to store raw binary data, Tables to use key/value pair data (like NoSQL data structures), Queues that allow interaction between stateless roles, and a relational SQL Server database.
    Other Services: Windows Azure has many other services such as a security mechanism, a Cache (memcacheD compliant), a Service Bus, a Traffic Manager and more. Once again, these features can be used with a Windows Azure project, or alone, based on your needs.
    Various Languages: Windows Azure supports the .NET stack of languages, as well as many Open-Source languages like Java, Python, PHP, Ruby, NodeJS, C++ and more.
    Use Software Also called “Software as a Service” (SaaS), this often means consumer or business-level software like Hotmail or Office 365. In other words, you simply log on, use the software, and log off – there’s nothing to install, and little to even configure. For the Information Technology professional, however, it’s not quite the same. We want software that provides services, but in a platform. That means we want things like Hadoop or other software we don’t want to have to install and configure.
    Components you can use:
    Kits: Various software “kits” or packages are supported with just a few clicks, such as Umbraco, Wordpress, and others.
    Windows Azure Media Services: Windows Azure Media Services is a suite of services that allows you to upload media for encoding, processing and even streaming – or even one or more of those functions. We can add DRM and even commercials to your media if you like. Windows Azure Media Services is used to stream everything from large events all the way down to small training videos.
    High Performance Computing and “Big Data”: Windows Azure allows you to scale to huge workloads, using a few clicks to deploy Hadoop Clusters or High Performance Computing (HPC) nodes, accepting HPC Jobs, Pig and Hive Jobs, and even interfacing with Microsoft Excel.
    Windows Azure Marketplace: Windows Azure Marketplace offers data and programs you can quickly implement and use – some free, some for-fee.
    Run Software Also known as “Infrastructure as a Service” (IaaS), this offering allows you to build or simply choose a Virtual Machine to run server-based software. Components you can use:
    Persistent Virtual Machines: You can choose to install Windows Server, Windows Server with Active Directory, with SQL Server, or even SharePoint from a pre-configured gallery. You can configure your own server images with standard Hyper-V technology and load them yourselves – and even bring them back when you’re done. As a new offering, we also even allow you to select various distributions of Linux – a first for Microsoft.
    Windows Azure Connect: You can connect your on-premises networks to Windows Azure Instances.
    Storage: Windows Azure Storage can be used as a remote backup, a hybrid storage location and more, using software or even hardware appliances.
    Decision Matrix With all of these options, you can use Windows Azure to solve just about any computing problem. It’s often hard to know when to use something on-premises versus in the cloud, and what kind of service to use. I’ve used a decision matrix in the last couple of years to take a particular problem and choose the proper technology to solve it. It’s all about options – there is no “silver bullet”, whether that’s Windows Azure or any other set of functions. I take the problem, decide which particular component I want to own and control – and choose the column that has that box darkened. For instance, if I have to control the wiring for a solution (a requirement in some military and government installations), that means the “Networking” component needs to be dark, and so I select the “On Premises” column for that particular solution. If I just need the solution provided and I want no control at all, I can look at “Software as a Service” solutions. Security, Pricing, and Other Info
    Security: Security is one of the first questions you should ask in any distributed computing environment. We have certification info, coding guidelines and more, even a general “Request for Information” RFI Response already created for you.
    Pricing: Are there licenses? How much does this cost? Is there a way to estimate the costs in this new environment?
    New Features: Many new features were added to Windows Azure - a good roundup of those changes can be found here.
    Support: Software Support on Virtual Machines, general support.

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. 
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org

    Read the article

  • How to troubleshoot problem with OpenVPN Appliance Server not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running
    3) I have configured it as instructed; the only issue was that the legacy network adapter does not have the setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) Server is running, web interfaces look good
    5) I am trying to connect from a Vista 64 box and cannot
    5a) If I set it to UDP I am stuck at Authorizing, and the client log looks like:
    10/11/09 15:00:42: INFO: OvpnConfig: connect...
    10/11/09 15:00:42: INFO: Gui listen socket at 34567
    10/11/09 15:00:42: INFO: sending start command to instantiator...
    10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
    10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
    10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
    10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
    10/11/09 15:00:43: INFO: Processing PASSWORD.
    10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
    10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
    10/11/09 15:00:43: INFO: Got auth request from active_config from 0
    10/11/09 15:00:47: INFO: Sending Credentials....
    10/11/09 15:00:47: INFO: Sending 25 bytes for username.
    10/11/09 15:00:47: INFO: Sent 25 bytes for username.
    10/11/09 15:00:47: INFO: Sending 30 bytes for password.
    10/11/09 15:00:47: INFO: Sent 30 bytes for password.
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
    10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
    10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
    10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
    10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
    5b) If I set up the server for TCP and try to connect, I get a loop of Authorizing and Reconnecting. The log looks like:
    10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
    10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
    10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
    10/11/09 15:00:43: INFO: Processing PASSWORD.
    10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
    10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
    10/11/09 15:00:43: INFO: Got auth request from active_config from 0
    10/11/09 15:00:47: INFO: Sending Credentials....
    10/11/09 15:00:47: INFO: Sending 25 bytes for username.
    10/11/09 15:00:47: INFO: Sent 25 bytes for username.
10/11/09 15:00:47: INFO: Sending 30 bytes for password. 10/11/09 15:00:47: INFO: Sent 30 bytes for password. 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,, 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,, 10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868 10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378 10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888 ... Is there anyway to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
    After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they’re giving it these days. (Seriously… that no one could come up with a suitable replacement for “metro” is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they’re seeing as much as 70% code reuse. Not bad.

    However, I’ve already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing the SDK already caused dismay among developers, and I’m sure this won’t help.

    So I started refactoring some code I already had for a Windows 8 Store app (ugh) to accommodate the use of HttpWebRequest instead. I haven’t tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point since it’s been a long time since I used HttpWebRequest, and keep in mind that it has no exception handling. It needs refinement (one possible refinement is sketched at the end of this post). The goal here is to new up the client, and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here’s the start:

    using System.IO;
    using System.Net;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    namespace MahProject
    {
        public class ServiceClient<T> where T : class
        {
            private readonly string _url;

            public ServiceClient(string url)
            {
                _url = url;
            }

            public async Task<T> GetResult()
            {
                var response = await MakeAsyncRequest(_url);
                var result = JsonConvert.DeserializeObject<T>(response);
                return result;
            }

            public static Task<string> MakeAsyncRequest(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.ContentType = "application/json";
                Task<WebResponse> task = Task.Factory.FromAsync(
                    request.BeginGetResponse,
                    asyncResult => request.EndGetResponse(asyncResult),
                    null);
                return task.ContinueWith(t => ReadStreamFromResponse(t.Result));
            }

            private static string ReadStreamFromResponse(WebResponse response)
            {
                using (var responseStream = response.GetResponseStream())
                using (var reader = new StreamReader(responseStream))
                {
                    var content = reader.ReadToEnd();
                    return content;
                }
            }
        }
    }

    Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (Park model class omitted because it doesn’t matter):

    public class ParkRepo
    {
        public async Task<Park[]> GetAllParks()
        {
            var client = new ServiceClient<Park[]>("http://superfoo/endpoint");
            return await client.GetResult();
        }
    }

    And then from inside your WP8 or W8S app (see what I did there?), when you load state or do some kind of UI event handler (making sure the method uses the async keyword):

    var parkRepo = new ParkRepo();
    var results = await parkRepo.GetAllParks();
    // bind results to some UI or observable collection or something

    Hopefully this saves you a little time.
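    As for the refinement mentioned above, here is a minimal, hedged sketch of adding exception handling around the request. TryGetResult is a hypothetical helper name, not part of the original post; it only uses members already defined in ServiceClient<T>. Note that a failure inside the ContinueWith continuation surfaces as an AggregateException wrapping the underlying WebException, so both are caught:

    // Hypothetical addition to ServiceClient<T>; returns null on failure so
    // the caller can decide what to do (T is constrained to class above).
    public async Task<T> TryGetResult()
    {
        try
        {
            var response = await MakeAsyncRequest(_url);
            return JsonConvert.DeserializeObject<T>(response);
        }
        catch (WebException)
        {
            // DNS failure, timeout, or a non-success HTTP status.
            return null;
        }
        catch (AggregateException)
        {
            // t.Result inside the ContinueWith continuation wraps failures
            // (typically a WebException) in an AggregateException.
            return null;
        }
    }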

    Read the article

  • Reading XML Content

    using System;
    using System.Collections.Generic;
    using System.Reflection;
    using System.Xml;

    namespace XMLReading
    {
        class Program
        {
            static void Main(string[] args)
            {
                string fileName = @"C:\temp\t.xml";
                List<EmergencyContactXMLDTO> emergencyContacts =
                    new XmlReader<EmergencyContactXMLDTO, EmergencyContactXMLDTOMapper>().Read(fileName);
                foreach (var item in emergencyContacts)
                {
                    Console.WriteLine(item.FileNb);
                }
            }
        }

        // Generic reader: builds one DTO per occurrence of the mapper's main
        // tag, using reflection to set each mapped property from the XML.
        public class XmlReader<TDTO, TMAPPER>
            where TDTO : BaseDTO, new()
            where TMAPPER : PCPWXMLDTOMapper, new()
        {
            public List<TDTO> Read(String fileName)
            {
                XmlTextReader reader = new XmlTextReader(fileName);
                List<TDTO> emergencyContacts = new List<TDTO>();
                while (true)
                {
                    TMAPPER mapper = new TMAPPER();
                    bool isFound = SeekElement(reader, mapper.GetMainXMLTagName());
                    if (!isFound) break;
                    TDTO dto = new TDTO();
                    foreach (var propertyKey in mapper.GetPropertyXMLMap())
                    {
                        String dtoPropertyName = propertyKey.Key;
                        String xmlPropertyName = propertyKey.Value;
                        SeekElement(reader, xmlPropertyName);
                        SetValue(dto, dtoPropertyName, reader.ReadElementString());
                    }
                    emergencyContacts.Add(dto);
                }
                return emergencyContacts;
            }

            private void SetValue(Object dto, String propertyName, String value)
            {
                PropertyInfo prop = dto.GetType().GetProperty(propertyName,
                    BindingFlags.Public | BindingFlags.Instance);
                prop.SetValue(dto, value, null);
            }

            // Advances the reader until it lands on an element with the given
            // name; returns false when the end of the document is reached.
            private bool SeekElement(XmlTextReader reader, String elementName)
            {
                while (reader.Read())
                {
                    XmlNodeType nodeType = reader.MoveToContent();
                    if (nodeType != XmlNodeType.Element)
                    {
                        continue;
                    }
                    if (reader.Name == elementName)
                    {
                        return true;
                    }
                }
                return false;
            }
        }

        public class BaseDTO
        {
        }

        public class EmergencyContactXMLDTO : BaseDTO
        {
            public string FileNb { get; set; }
            public string ContactName { get; set; }
            public string ContactPhoneNumber { get; set; }
            public string Relationship { get; set; }
            public string DoctorName { get; set; }
            public string DoctorPhoneNumber { get; set; }
            public string HospitalName { get; set; }
        }

        // Maps DTO property names to the XML element names holding their values.
        public interface PCPWXMLDTOMapper
        {
            Dictionary<string, string> GetPropertyXMLMap();
            String GetMainXMLTagName();
        }

        public class EmergencyContactXMLDTOMapper : PCPWXMLDTOMapper
        {
            public Dictionary<string, string> GetPropertyXMLMap()
            {
                return new Dictionary<string, string>
                {
                    { "FileNb", "XFileNb" },
                    { "ContactName", "XContactName" },
                    { "ContactPhoneNumber", "XContactPhoneNumber" },
                    { "Relationship", "XRelationship" },
                    { "DoctorName", "XDoctorName" },
                    { "DoctorPhoneNumber", "XDoctorPhoneNumber" },
                    { "HospitalName", "XHospitalName" },
                };
            }

            public String GetMainXMLTagName()
            {
                return "EmergencyContact";
            }
        }
    }
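    For reference, the element names in EmergencyContactXMLDTOMapper imply an input file shaped roughly like the sketch below. This is a hypothetical t.xml inferred from the mapper, not supplied with the original code; the root element name is an assumption. Because the reader seeks child elements sequentially, they must appear in the same order as the property map:

    <Contacts>
      <EmergencyContact>
        <XFileNb>F-1001</XFileNb>
        <XContactName>Jane Doe</XContactName>
        <XContactPhoneNumber>555-0100</XContactPhoneNumber>
        <XRelationship>Spouse</XRelationship>
        <XDoctorName>Dr. Smith</XDoctorName>
        <XDoctorPhoneNumber>555-0111</XDoctorPhoneNumber>
        <XHospitalName>General Hospital</XHospitalName>
      </EmergencyContact>
    </Contacts>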

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

    1. Boot the workstation from the NIC.
    2. The workstation sends a DHCP request.
    3. The DHCP server responds with an IP address and the location of the PXE server.
    4. The workstation downloads the WinPE image file from the PXE server via TFTP.
    5. The workstation stores the WinPE image file in memory and executes it.

    Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter.

    The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations, and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6, running Windows Server 2003 (various editions) or 2008 (various editions).

    To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us:

    1. Sets up RAID with the HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc.)

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server, I would have to do the following:

    1. As I won't have access to the ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 during the boot process to access the Option ROM Configuration for Arrays (ORCA).
    2. SNMP and the HP tools will have to be installed once the Windows installation is complete, using the Proliant Support Pack.

    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12

    I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE, as I'm happy to just press F8 and use the Option ROM Configuration for Arrays (ORCA). Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE.

    One of the nice features all the tools possess is that you can pass input files to them, e.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), and then use the tool on other servers, passing in the input file with the captured configuration.

    Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. two physical disks in a RAID 1 configuration) without the array controller drivers?

    I mentioned in my post that SmartStart normally installs a bunch of HP tools for Windows for you. I've had a look today, and if you run the SmartStart CD from within Windows, all the tools can be installed. Therefore I can do this after the Windows installation is complete.

    The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers: I believe that you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization only has one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything, either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX?

    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying:

    For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Microsoft Enterprise Library Caching Application Block not thread safe?!

    - by AlanR
    Good afternoon. I created a super simple console app to test out the Enterprise Library Caching Application Block, and the behavior is baffling. I'm hoping I screwed up something that's easy to fix in the setup. I have each item expire after 5 seconds for testing purposes. The basic setup: every second, pick a number between 0 and 2. If the cache doesn't already have it, put it in there; otherwise just grab it from the cache. Do this inside a LOCK statement to ensure thread safety.

    APP.CONFIG:

    <configuration>
      <configSections>
        <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      </configSections>
      <cachingConfiguration defaultCacheManager="Cache Manager">
        <cacheManagers>
          <add expirationPollFrequencyInSeconds="1" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="Null Storage" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Cache Manager" />
        </cacheManagers>
        <backingStores>
          <add encryptionProviderName="" type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Null Storage" />
        </backingStores>
      </cachingConfiguration>
    </configuration>

    C#:

    using System;
    using Microsoft.Practices.EnterpriseLibrary.Caching;
    using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

    namespace ConsoleApplication1
    {
        class Program
        {
            public static ICacheManager cache = CacheFactory.GetCacheManager("Cache Manager");

            static void Main(string[] args)
            {
                while (true)
                {
                    System.Threading.Thread.Sleep(1000); // sleep for one second
                    var key = new Random().Next(3).ToString();
                    string value;
                    lock (cache)
                    {
                        if (!cache.Contains(key))
                        {
                            cache.Add(key, key, CacheItemPriority.Normal, null,
                                new SlidingTime(TimeSpan.FromSeconds(5)));
                        }
                        value = (string)cache.GetData(key);
                    }
                    Console.WriteLine("{0} --> '{1}'", key, value);
                    //if (null == value) throw new Exception();
                }
            }
        }
    }

    OUTPUT -- how can I prevent the cache from returning nulls?

    2 --> '2'
    1 --> '1'
    2 --> '2'
    0 --> '0'
    2 --> '2'
    0 --> '0'
    1 --> ''
    0 --> '0'
    1 --> '1'
    2 --> ''
    0 --> '0'
    2 --> '2'
    0 --> '0'
    1 --> ''
    2 --> '2'
    1 --> '1'
    Press any key to continue . . .

    Thanks in advance, -Alan.
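    A hedged observation, not from the original question: the block's expiration runs on its own background thread (note expirationPollFrequencyInSeconds="1" in the config), so an item can expire between the Contains check and the GetData call even inside the lock; the lock only serializes this app's threads, not the block's internal expiration. One workaround sketch, using only the ICacheManager members already shown above (GetOrAdd is a hypothetical helper name), is to treat a null from GetData as authoritative and re-add on a miss:

    // Sketch: read once and re-add on a miss, avoiding the Contains/GetData race.
    private static string GetOrAdd(ICacheManager cache, string key)
    {
        lock (cache)
        {
            // A null here means "missing or just expired".
            var value = (string)cache.GetData(key);
            if (value == null)
            {
                value = key;
                cache.Add(key, value, CacheItemPriority.Normal, null,
                    new SlidingTime(TimeSpan.FromSeconds(5)));
            }
            return value;
        }
    }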

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14"Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release. I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:

    - DC: Domain Controller
    - ExchangeUM: Exchange Server 2010
    - CS-SE: Microsoft Communications Server 2010 Standard Edition
    - TS: Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:

    - You have an existing CS 2010 site and cluster configured (we'll look at this in a future post)
    - You're starting with a fully patched Windows Server 2008 R2 machine
    - The machine is joined to your domain

    This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning; all you need in your app.config is your ApplicationId, e.g. urn:application:MyApplication. (See the hedged sketch at the end of this post for what consuming this can look like in code.)

    How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components. This checks for the following components and installs them if they are not already present:

    - VcRedist
    - Sqlexpress
    - Sqlnativeclient
    - Sqlbackcompat
    - Ucmaredist
    - OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

    Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"

    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server.
    Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

    New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>

    Verify that the new server was added to the CS topology by running the following PowerShell command:

    (Get-CsTopology -AsXml).ToString() > Topology.xml

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of.

    <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
      <ClusterId SiteId="UcMarketing2" Number="5" />
      <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
        <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
      </Machine>
    </Cluster>

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server.

    From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

    Enable-CsReplica

    The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

    Get-CsManagementStoreReplicationStatus

    You can see in the screenshot below that the UpToDate property of the new server is still False.

    Run the following PowerShell command to force the replication to run:

    Invoke-CsManagementStoreReplication

    Run Get-CsManagementStoreReplicationStatus again to verify that the new server is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

    Request-CsCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>

    Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate.
    <?xml version="1.0" encoding="utf-8"?>
    <Action Name="Request-CsCertificate" Time="20100512T212258">
      <Action Name="Request-CsCertificate" Time="20100512T212258">
        <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
        <Action Time="20100512T212258">
          <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
          <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
          <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
          <Info Time="20100512T212259">The certificate was issued.</Info>
          <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
          <Complete Time="20100512T212259" />
        </Action>
        <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
        <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
          <Info Time="20100512T212259">0 updates</Info>
          <Complete Time="20100512T212259" />
        </Action>
        <Info Title="command status" Time="20100512T212259">Command has completed</Info>
      </Action>
    </Action>

    Run the following PowerShell command to set the certificate:

    Set-CsCertificate -Type Default -Thumbprint <Thumbprint>

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.
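    As referenced earlier, here is a hedged sketch of what consuming auto-provisioning can look like in application code, based on my understanding of the pre-beta UCMA 3.0 bits; the ProvisionedApplicationPlatformSettings usage is an assumption on my part and the names may change before release:

    // Hedged sketch: the only local configuration is the application id; GRUU,
    // ports, certificates, and endpoints come from the replicated Central
    // Management Database rather than app.config.
    using System;
    using Microsoft.Rtc.Collaboration;

    class AutoProvisioningSketch
    {
        static void Main()
        {
            var settings = new ProvisionedApplicationPlatformSettings(
                "MyApplication",                    // user agent string
                "urn:application:MyApplication");   // ApplicationId from app.config

            var platform = new CollaborationPlatform(settings);

            // Synchronous wrapper around the async startup pattern, for brevity.
            platform.EndStartup(platform.BeginStartup(null, null));
            Console.WriteLine("Platform started.");

            platform.EndShutdown(platform.BeginShutdown(null, null));
        }
    }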

    Read the article

  • C# Bind DataTable to Existing DataGridView Column Definitions

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows; execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show that 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

    private void UpdateResults(DataTable entries)
    {
        dataGridView.DataSource = entries;
    }

    private void button_Click(object sender, EventArgs e)
    {
        PerformQuery();
    }

    private void PerformQuery()
    {
        DateTime start = new DateTime(dateTimePicker1.Value.Year,
            dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
        DateTime stop = new DateTime(dateTimePicker2.Value.Year,
            dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
        DataTable entries = QueryEventEntries(start, stop);
        UpdateResults(entries);
    }

    private DataTable QueryEventEntries(DateTime start, DateTime stop)
    {
        DataTable entries = new DataTable();
        entries.Columns.AddRange(new DataColumn[] {
            new DataColumn("event_type", typeof(Int32)),
            new DataColumn("event_time", typeof(DateTime)),
            new DataColumn("event_detail", typeof(String))});
        using (SqlConnection conn = new SqlConnection(DSN))
        {
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT event_type, event_time, event_detail FROM event_log " +
                "WHERE event_time >= @start AND event_time <= @stop", conn))
            {
                adapter.SelectCommand.Parameters.AddRange(new Object[] {
                    new SqlParameter("@start", start),
                    new SqlParameter("@stop", stop)});
                adapter.Fill(entries);
            }
        }
        return entries;
    }

    Update

    I'd like to summarize and provide some additional information I've learned from the discussion here and from debugging efforts since I originally posted this question.

    I am refactoring old code that retrieved records from a database, collected those records as an array, and then later iterated through the array to populate a DataGridView row by row. Threading was originally implemented to compensate and keep the UI responsive during the unnecessary looping. I have since stripped out Thread/Invoke; everything now occurs on the same execution thread (thank you, Sam). I am attempting to replace the slow, unwieldy approach with a DataTable which I can fill with a DataAdapter and assign to the DataGridView through its DataSource property (code above updated).

    I've iterated through the entries DataTable's rows to verify the table contains the expected data before assigning it as the DataGridView's DataSource:

    foreach (DataRow row in entries.Rows)
    {
        System.Diagnostics.Trace.WriteLine(
            String.Format("{0} {1} {2}", row[0], row[1], row[2]));
    }

    One of the columns of the DataGridView is a custom DataGridViewColumn that stylizes the event_type value. I apologize I didn't mention this before in the original post, but I wasn't aware it was important to my problem. I have converted this column temporarily to a standard DataGridViewTextBoxColumn control and am no longer experiencing the exception.

    The fields in the DataTable are appended to the list of fields that were pre-specified in the Design view of the DataGridView, and the records' values are being populated in these appended fields. When the runtime attempts to render the custom cell, a null value is provided to it, because the value that should be rendered actually lands a couple of columns over. In light of this, I am re-titling and re-tagging the question. I would still appreciate it if others who have experienced this can instruct me on how to go about binding the DataTable to the existing column definitions of the DataGridView.
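    A hedged sketch of the usual way to bind a DataTable to columns defined at design time, assuming standard WinForms behavior: turn off AutoGenerateColumns so the grid stops appending auto-generated columns, and point each predefined column's DataPropertyName at the matching DataTable field. The grid column names below are hypothetical placeholders, not the asker's actual designer names:

    // Sketch: map the design-time columns onto the DataTable's fields.
    private void UpdateResults(DataTable entries)
    {
        // Stop the grid from appending one auto-generated column per DataColumn.
        dataGridView.AutoGenerateColumns = false;

        // Bind each predefined column (including the custom one) to its field;
        // "colEventType" etc. are hypothetical designer column names.
        dataGridView.Columns["colEventType"].DataPropertyName = "event_type";
        dataGridView.Columns["colEventTime"].DataPropertyName = "event_time";
        dataGridView.Columns["colEventDetail"].DataPropertyName = "event_detail";

        dataGridView.DataSource = entries;
    }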

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92  | Next Page >