Search Results

Search found 18096 results on 724 pages for 'let'.


  • Django doesn't refresh my request object when reloading the current page.

    - by Boris Rusev
    I have a Django web site which I want to be viewable in different languages. Until this morning everything was working fine. Here is the deal. I go to, say, my About Us page and it is in English. Below it there is the change language button, and when I press it everything "magically" translates to Bulgarian, just the way I want it. On the other hand, I have a JS menu from which the user is able to browse through the products. I click on 'T-Shirt', then a sub-menu opens below the previously pressed one, containing different categories: Men, Women, Children. The link guides me to a page where the exact clothes I have requested are listed. BUT... when I try to change the language THEN, nothing happens. I go to the About Us page, change the language from there, return to the clothes catalog, and the language is changed... I will now paste some code.

    This is my change button code:

        function changeLanguage() {
            if (getCookie('language') == 'EN') {
                setCookie("language", 'BG');
            } else {
                setCookie("language", 'EN');
            }
            window.location.reload();
        }

    These are my URL patterns:

        urlpatterns = patterns('',
            # Example:
            # (r'^enter_clothing/', include('enter_clothing.foo.urls')),

            # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
            # to INSTALLED_APPS to enable admin documentation:
            # (r'^admin/doc/', include('django.contrib.admindocs.urls')),

            # Uncomment the next line to enable the admin:
            (r'^site_media/(?P<path>.*)$', 'django.views.static.serve',
                {'document_root': '/home/boris/Projects/enter_clothing/templates/media', 'show_indexes': True}),
            (r'^$', 'enter_clothing.clothes_app.views.index'),
            (r'^home', 'enter_clothing.clothes_app.views.home'),
            (r'^products', 'enter_clothing.clothes_app.views.products'),
            (r'^orders', 'enter_clothing.clothes_app.views.orders'),
            (r'^aboutUs', 'enter_clothing.clothes_app.views.aboutUs'),
            (r'^contactUs', 'enter_clothing.clothes_app.views.contactUs'),
            (r'^admin/', include(admin.site.urls)),
            (r'^(\w+)/(\w+)/page=(\d+)', 'enter_clothing.clothes_app.views.displayClothes'),
        )

    My About Us page:

        @base
        def aboutUs(request):
            return """<b>%s</b>""" % getTranslation("About Us Text", request.COOKIES['language'])

    The @base decorator:

        def base(myfunc):
            def inner_func(*args, **kwargs):
                try:
                    args[0].COOKIES['language']
                except:
                    args[0].COOKIES['language'] = 'BG'
                resetGlobalVariables()
                initCollections(args[0])
                categoriesByCollection = dict((collection, getCategoriesFromCollection(args[0], collection))
                                              for collection in collections)
                if args[0].COOKIES['language'] == 'BG':
                    for k, v in categoriesByCollection.iteritems():
                        categoriesByCollection[k] = reduce(lambda a, b: a + b,
                            map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>"""
                                % (translateCategory(args[0], x), translateCollection(args[0], k), str(x)), v), "")
                else:
                    for k, v in categoriesByCollection.iteritems():
                        categoriesByCollection[k] = reduce(lambda a, b: a + b,
                            map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>"""
                                % (str(x), str(k), str(x)), v), "")
                contents = myfunc(*args, **kwargs)
                return render_to_response('index.html', {
                    'title': title,
                    'categoriesByCollection': categoriesByCollection.iteritems(),
                    'keys': enumerate(keys),
                    'values': enumerate(values),
                    'contents': contents,
                    'btnHome': getTranslation("Home Button", args[0].COOKIES['language']),
                    'btnProducts': getTranslation("Products Button", args[0].COOKIES['language']),
                    'btnOrders': getTranslation("Orders Button", args[0].COOKIES['language']),
                    'btnAboutUs': getTranslation("About Us Button", args[0].COOKIES['language']),
                    'btnContacts': getTranslation("Contact Us Button", args[0].COOKIES['language']),
                    'btnChangeLanguage': getTranslation("Button Change Language", args[0].COOKIES['language']),
                })
            return inner_func

    And the catalog page:

        @base
        def displayClothes(request, category, collection, page):
            clothesToDisplay = getClothesFromCollectionAndCategory(request, category, collection)
            contents = ""
            pageCount = len(clothesToDisplay) / (rowCount * columnCount) + 1
            matrixSize = rowCount * columnCount
            currentPage = str(page).replace("page=", "")
            currentPage = int(currentPage) - 1
            #raise Exception(request)
            # this is for the clothes layout
            for x in range(currentPage * matrixSize, matrixSize * (currentPage + 1)):
                if x < len(clothesToDisplay):
                    if request.COOKIES['language'] == 'EN':
                        contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getEnglishHTML()
                    else:
                        contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getBulgarianHTML()
                    if (x + 1) % columnCount == 0:
                        contents += """<div class="clear"></div>"""
            contents += """<div class="clear"></div>"""
            # this is for the page links
            if pageCount > 1:
                for x in range(0, pageCount):
                    if x == currentPage:
                        contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: black;">%s</span></a>""" % (category, collection, x + 1, x + 1)
                    else:
                        contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: blue;">%s</span></a>""" % (category, collection, x + 1, x + 1)
            return """%s""" % (contents)

    Let me explain that you needn't be alarmed by the large quantity of code I have posted. You don't have to understand it or even look at all of it. I've published it just in case, because I really can't understand the origins of the bug. Now, this is how I have narrowed down the problem. I am debugging with "raise Exception(request)" every time I want to know what's inside my request object. When I place this in my aboutUs method, the language cookie value changes every time I press the language button. But NOT when I am in the displayClothes method. There the language stays the same. Also, I tried putting the exception line at the beginning of the @base method. It turns out the situation there is exactly the same. When I am on my About Us page and click on the button, the language in my request object changes, but when I press the button while on the catalog page it remains unchanged. That is all I could find, and I have no idea as to how Django distinguishes my pages and in what way. P.S. I think the JavaScript works perfectly; I have tested it in multiple ways. Thank you, I hope some of you will read this enormous post, and don't hesitate to ask for more code excerpts.

    Read the article

  • CodePlex Daily Summary for Tuesday, December 07, 2010

    CodePlex Daily Summary for Tuesday, December 07, 2010

    Popular Releases

    - My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup, WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
    - nopCommerce, ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes, please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - Aura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from the icon of the foreground window.
    - myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new Group By...); added sort by media; added Manage Movie Studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from BibTex XML files; bug fixing.
    - mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: web: for install hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder). src: System requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables; if .\SQLEXPRESS, auto creation of the database in the App_Data folder), Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug the mytrip.mvc 1.0.49.0 beta src, download and ...
    - Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys; shortcuts like Ctrl-Alt-A are now supported (where the browser permits it); the PopupMenuSeparator is now completely based on the PopupMenuItem class; moved item manipulation code to a partial class in PopupMenuItemsControl.cs; moved menu management and keyboard navigation code to the new PopupMenuManager class; simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...
    - SubtitleTools: SubtitleTools 1.0: First public release.
    - MiniTwitter: 1.62: MiniTwitter 1.62.
    - Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release makes a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience; we determined the biggest bottlenecks and found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...
    - Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 Extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing to fetch data); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...
    - MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
    - DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Updated to support jQuery 1.4.2; updated to support jQuery UI 1.8.6; updated to Visual Studio 2010; new WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.
    - LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix for COM reference handling of the Application object in some classes. Release+Samples V0.7g: contains the runtime DLL and sample projects. Sample projects: COMAddinExample demonstrates a version-independent COM add-in; Example01: background colors and borders for cells; Example02: font attributes and alignment for cells; Example03: number formats; Example04: shapes, WordArts, P...
    - ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See what's new, with full details, at http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
    - ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta released: Today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble charts. This release also includes a few bug fixes: AxisXLabel labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set in Chart; if the LabelStyle property was set to 'Inside', the size of the pie was not proper. Yo...
    - AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...
    - NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.
    - Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0, published May 5th 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 ; PEX & MOLES 0.94.51023.0, 29/Oct/2010, Visual Studio 2010 Power Tools: http://re...

    New Projects

    - Acorn: Little acorns lead to mighty oaks.
    - Algorithmia: Algorithm and data-structure library for .NET 3.5 and up. Algorithmia contains sophisticated algorithms and data-structures like graphs, priority queues, command, undo-redo and more.
    - Base Station Verification system: Base Station Verification system.
    - BlueAd: Simple app to broadcast messages to bluetooth enabled devices.
    - BuiltWith Fiddler Integration: BuiltWithFiddler adds BuiltWith functionality to the HTTP debugging proxy Fiddler. It helps to determine the underlying technologies used in HTTP responses. www.builtwith.com www.fiddler2.com It is written in C# by Andy at Bare Web.
    - BVCMS.app: The Bellevue Church Management System is a complete Web-based application for managing your church. This iPhone app provides tools to connect to bvcms so that users can search, check in members, and perform other actions.
    - coffeeGreet: CoffeeGreet is a WordPress plug-in that will greet your visitors with coffee depending on the hour of the day, by displaying images using the Flickr API.
    - DCEL data structure: Doubly-connected edge list data structure implementation in C#.
    - El Bruno ClickOnce Demo: Demo of ClickOnce on CodePlex.
    - Firen's Laboratory: Nothing.
    - FunCam: A fun application for playing with your webcam. Experiment with different overlays and exciting effects. Save the images when you want, or on a timer. Great fun for parties! (WPF/C#) Uses WPF Media Kit for webcam integration, and Shazzam for the great shader effects.
    - GammaJul LgLcd: A .NET wrapper around the Logitech SDK for G15/G19 keyboard screens. Supports raw byte sending, GDI+ drawing and rendering WPF elements onto the screen.
    - Getting Started CodePlex: This is a demo for using TFS in CodePlex.
    - GPUG (Dynamics GP User Group): The location for GPUG members to share code.
    - HPMC: Demo.
    - ImageOfMeLocator: Team Boarders Platform: WordPress. Objectives: 1. Create a plugin for WordPress. 2. Create a plugin that allows users to browse images uploaded on their Flickr Account and use them as overlays for store locations on a large map. 3. Create a plug...
    - jDepot: jQuery Ajax, jQuery UI and ASP.NET MVC based online store application. This software will let a user manage their product inventory by exposing CRUD operations through the UI. Customers can buy these products and track each shipment separately. It is developed in C#.
    - JQuery Cycle Carousel for DotNetNuke®: DNN Module JQuery Cycle Carousel. This module will show images as a carousel using the Cycle jQuery plugin. You can easily change the Cycle effect and other settings in the module.
    - Local Movie DB in C#: C# WPF project. Will create a local movie database where users can create their own DB of the movies they own/seen/liked ... etc.
    - Location Framework for Windows Phone 7 and Windows Azure: A framework to build location based applications with Windows Phone 7 and Windows Azure.
    - OraLibs: Collection of useful PL/SQL procedures, which contain methods for working with arrays, strings, numbers, dates.
    - Phyo: License management.
    - Repositório de Monografias: The Monograph Repository will: save in one repository all monographs posted during the term by the students of FACISA/FCM/ESAC; the system administrator will review each according to ABNT standards and return the necessary corrections to the student.
    - Secure SharePoint Silverlight Web Part - Silverlight Security & Auditing: The Secure Silverlight WebPart provides both built-in security using default SharePoint security mechanisms as well as site-collection-specific auditing to record an event when a Silverlight file is newly hosted in the SharePoint environment.
    - SilverlightColorPicker: Photoshop-like ColorPicker built in Silverlight from scratch.
    - Sparrow.Net Connect: This is a passport system.
    - Sparrow.NET TaskMe: TaskMe is a project management web application, written using the Sparrow.Net framework.
    - SQLiteWrapper: A light C# wrapper around the sqlite library's functions.
    - SuperMarioBros.Net: A .Net Super Mario Bros clone.
    - Virtualizing Tree View: Tree view for large amounts of items.
    - Windows Forms GUI based Trace Listener: Gives a simple UI-based Trace Listener to debug/trace information. No need to look at EventLog / XML files etc. This code library helps you view the trace and debug entries. Can plug in to your WinForms app as well.
    - WP Socially Related: Automatically include related posts from Twitter, WordPress.com and Bing Search into each of your blog posts.

    Read the article

  • CodePlex Daily Summary for Thursday, February 24, 2011

    CodePlex Daily Summary for Thursday, February 24, 2011

    Popular Releases

    - Instant Feature Builder for Visual Studio 2010: Instant Feature Builder 1.2: This is the binary version of the Instant Feature Builder. Once downloaded, double click to install into Visual Studio. Version 1.2 fixes: rename FX to IFB to shorten path lengths; fix issue in ExecuteCommand; fix issue to work around a problem with the VS Template writer. The Instant Feature Builder is a tool which enables you, via drag-and-drop, to build a specific type of Visual Studio extension (VSIX) known as a Feature Extension. A Feature Extension packages project and/or item templates,...
    - DirectQ: Release 1.8.7 (Beta): Beta release of 1.8.7 to get feedback on what works well, what doesn't work well, and what doesn't work at all. D3D9 hardware with ps2.0 is a must. Faster, more streamlined and more integrated rendering capabilities with additional MP features and support.
    - Smartkernel: Smartkernel.
    - Chiave File Encryption: Chiave 0.9.1: Application for file encryption and decryption using the 512-bit Rijndael encryption algorithm, with a simple-to-use UI. It's written in C# and compiled against .NET 3.5. It incorporates Windows 7 features like jump lists, taskbar progress and Aero Glass. Change log from 0.9 Beta to 0.9.1: added option for system shutdown, sleep or hibernate after the operation has completed; minor changes to the UI; numerous bug fixes. Feedback is welcome!
    - DotNetNuke® Store: 03.00.00: What's new in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! This version is the same code base as the version 02.01.51 RC, just some cleaning and source code release before submission to the release tracker for "official" release.
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.45.2: New in this release: 1) added data validation (see Data Validation); 2) deleting or clearing cells deletes the hyperlinks too. New in v0.45.1: 1) fixed issues 6237, 6240. New in v0.45.2: 1) fixed issues 6257, 6266. New examples: Data Validation.
    - OMEGA CMS: OMEGA CMS - Alpha 0.2: A few fixes for the OMEGA Framework (DLL); a few tweaks for OMEGA CMS.
    - Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since the Silverlight Toolkit Feb 2010 has it.
    - Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7. Important when upgrading: upgrade as if it were a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation 4.7 requires the .NET 4.0 framework. Web.config changes: update the web.config to include the 4 changes found in (they're clearly marked in...
    - HubbleDotNet - Open source full-text search engine: V1.1.0.0: Added Sqlite3 DBAdapter; added app report when the query cache is collecting; improved the performance of the index through Synchronize; added a "top 0" feature so that we can get only the count of the result; improved the score calculating algorithm of match, letting the score of records that match all items be larger than others; added MySql DBAdapter; improved performance for multi-field sorts; using a hash table to access the payload data (the version before used binary search); using heap sort instead of qui...
    - Xen: Graphics API for XNA: Xen 2.0: This is the final release of Xen; Xen 2.0. Xen 2.0 supports PC and Xbox 360 running XNA 4. The documentation download is coming soon. Due to restrictions in XNA 4, building Xen requires a DirectX 10 capable video card (Xen applications can still run on Windows XP and DirectX 9 video cards).
    - Silverlight????[???]: silverlight????[???] 2.0.
    - DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0. Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense (further testing needed). Highlight field and table names in SQL scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in tables. Fixed comment/uncomment bug as reported by tareq. Included synonyms in the scripting engine (nickt_ch).
    - IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython Team
    - Caliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candidate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS templates: the templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet: this release does not have a corresponding NuGet package. The NuGet pack...
    - Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.
    - Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the posting guidelines and put it into the issue trac...
    - MiniTwitter: 1.66: MiniTwitter 1.66.
    - Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above); supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File; explorer supports operations running on multiple remote applications at the same time; explorer detects application disconnect (1-2 second delay); explorer confirms operation completed status; explorer d...
    - Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit overview: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...

    New Projects

    - Competition Management Platform: Paragliding competition management platform.
    - CS424 B2 Group - Car Management: CS424 B2 Group - Car Management.
    - EMS: The main aim of the system is to perform environment inventorization.
    - EVVA: EVVA is a software project to support private job placement agents.
    - FileSocialVB: A library to work with the filesocial.com web site and API through VB.NET. This is actually an offshoot of the http://twittervb.codeplex.com project. Though smaller, this will lead to a more compact library.
    - Hash Crack - IGProgram: Hash Crack is a software program for hash and password cracking. Hash Crack uses a dictionary or set of symbols for hash cracking, and also supports the pwdump file format for Windows password cracking (NTHash MD4). MD2, MD4, MD5, SHA1, SHA256, SHA384, SHA512.
    - Imail Spammer Killer: Bots seem to love picking off the weak passwords of IpSwitch IMail v10 users and using SMTP auth to spam the world with their newfound account access. This project is a Windows service to isolate and stop this activity by disabling the violated user account as it occurs.
    - JVS2USB: A hardware mod for connecting an I/O board (Capcom / Sega) to a PC over USB!
    - Library Reminder: Library Reminder.
    - MAVI: Mobile application for the visually impaired: bill recognition & tagging and recognizing objects based on a specific sticker.
    - Message splitting without envelope in Biztalk 2009: Message splitting without an envelope in Biztalk 2009. The project contains: source code; examples; an article describing how to do it.
    - Microsoft translator: Language translator designed to test the Microsoft Translator web service API on Windows Phone 7, developed using Visual Studio 2010 in C#.
    - mrmuffin: Mr Muffin WP7 game for children.
    - Network Monitor: Simple application with a vu-meter style display of recent incoming network traffic. Requires .NET 4; requires WinPCap (http://www.winpcap.org/); only tested to run under Windows 7.
    - NPhysics: NPhysics - Physical Data Types for .NET.
    - Open SimRacing Results: Open SimRacing Results aims to provide an open standard for race results of PC SimRacing games (like iRacing, rFactor, NR2003, ...), allowing developers of league management systems to use a unique interface to get the results from, regardless of the simulation used.
    - Orchard Image Gallery: The Orchard Image Gallery project is intended to provide an Image Gallery content part and/or widget for the Orchard Project.
    - Planning Poker for Windows Phone 7: Play planning poker on your Windows Phone 7.
    - Publishing and consuming WCF in Biztalk 2009 and Visual Studio 2008: The file contains: a Biztalk 2009 project, a C# console project, and an example.
    - Push Notification for Windows Phone 7 in php: WindowsPhonePushNotification enables you to use the Microsoft Push Notification Service in PHP.
    - Quksace Agjke: For more information, please visit http://students.cs.tamu.edu/abe/IS_and_R/HW3/quksace%20agjke.html
    - Read: a "GNU Make"-like utility for viewing Readme files: Read is a simple program to easily load and read README files on a UNIX-compliant system. It works in a similar way to GNU Make by searching a directory for a compatible file, in this case a Readme file, and loading it for reading using a text editor or viewer.
    - SQL CE 3.5 persistence mapper for ECO IV: May need some adjustments for ECO V/VI; definitely needs some rewrite to support SQL CE 4.0 fully.
    - SQL Server Job Failure Notification System: A simple means of ensuring you know when SQLAgent jobs fail.
    - SuperQuery: SuperQuery makes it easy to run the same batch of SQL across several databases on different SQL servers. SuperQuery supports all editions and versions of SQL Server from 2000 onwards. It is developed in C# using .NET 4 Client Profile.
    - System.Net.Mail Extended: An extension to the System.Net.Mail namespace, adding a POP3 client, enhanced SMTP client, IMAP client, POP3 server, SMTP server, and IMAP server, all written in Visual C#.
    - wow-combatlogs: A PowerShell module for working with combat logs generated by World of Warcraft.
    - WPF UndoManager: WPF UndoManager provides a simple undo manager on the basis of WPF's command pattern. It uses an implementation of the ICommand interface to manage a history of actions.
    - XO / TicTacToe game: Dynamic-sized XO / TicTacToe game for Windows Phone 7. Built using Visual Studio 2010 and C#.

    Read the article

  • In Apache CXF, how do I know the SOAP request message is gzip compressed?

    - by aspirant75
    I'm using Apache CXF to send SOAP messages. In one specific case, I have to send a SOAP message gzip-compressed. Using log4j, I printed detailed info. Would you let me know how I can tell that the message was gzip-compressed and transferred to the server? Thanks in advance. Below are my Java code for gzip and the log info.

    Java code:

        Client cxfClient = ClientProxy.getClient(port);
        /** Logging Interceptor */
        cxfClient.getInInterceptors().add(new GZIPInInterceptor());
        cxfClient.getOutInterceptors().add(new GZIPOutInterceptor());

    Log info:

        20120814 18:56:15,351 DEBUG Interceptors contributed by bus: []
        20120814 18:56:15,351 DEBUG Interceptors contributed by client: [org.apache.cxf.transport.http.gzip.GZIPOutInterceptor@1682a53]
        20120814 18:56:15,351 DEBUG Interceptors contributed by endpoint: [org.apache.cxf.interceptor.MessageSenderInterceptor@1b2d7df, org.apache.cxf.jaxws.interceptors.SwAOutInterceptor@7a9224, org.apache.cxf.jaxws.interceptors.WrapperClassOutInterceptor@110b640, org.apache.cxf.jaxws.interceptors.HolderOutInterceptor@2d59a3]
        20120814 18:56:15,351 DEBUG Interceptors contributed by binding: [org.apache.cxf.interceptor.AttachmentOutInterceptor@158015a, org.apache.cxf.interceptor.StaxOutInterceptor@c0c8b5, org.apache.cxf.binding.soap.interceptor.SoapHeaderOutFilterInterceptor@b914b3, org.apache.cxf.interceptor.BareOutInterceptor@fdfc58, org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor@c22a3b, org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor@1629e71]
        20120814 18:56:15,351 DEBUG Interceptors contributed by databinding: []
        20120814 18:56:15,357 DEBUG Adding interceptor org.apache.cxf.transport.http.gzip.GZIPOutInterceptor@1682a53 to phase prepare-send
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@1b2d7df to phase prepare-send
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.jaxws.interceptors.SwAOutInterceptor@7a9224 to phase pre-logical
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.jaxws.interceptors.WrapperClassOutInterceptor@110b640 to phase pre-logical
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.jaxws.interceptors.HolderOutInterceptor@2d59a3 to phase pre-logical
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.interceptor.AttachmentOutInterceptor@158015a to phase pre-stream
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.interceptor.StaxOutInterceptor@c0c8b5 to phase pre-stream
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.binding.soap.interceptor.SoapHeaderOutFilterInterceptor@b914b3 to phase pre-logical
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.interceptor.BareOutInterceptor@fdfc58 to phase marshal
        20120814 18:56:15,358 DEBUG Adding interceptor org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor@c22a3b to phase post-logical
        20120814 18:56:15,359 DEBUG Adding interceptor org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor@1629e71 to phase write
        20120814 18:56:15,360 DEBUG Chain org.apache.cxf.phase.PhaseInterceptorChain@31688f was created. Current flow:
          pre-logical [HolderOutInterceptor, SwAOutInterceptor, WrapperClassOutInterceptor, SoapHeaderOutFilterInterceptor]
          post-logical [SoapPreProtocolOutInterceptor]
          prepare-send [MessageSenderInterceptor, GZIPOutInterceptor]
          pre-stream [AttachmentOutInterceptor, StaxOutInterceptor]
          write [SoapOutInterceptor]
          marshal [BareOutInterceptor]
        20120814 18:56:15,361 DEBUG Invoking handleMessage on interceptor org.apache.cxf.jaxws.interceptors.HolderOutInterceptor@2d59a3
        20120814 18:56:15,361 DEBUG op: [OperationInfo: {https://asp.cyberbooking.co.kr/TopasApiSvc/services}getAirAvail]
        20120814 18:56:15,361 DEBUG op.hasOutput(): true
        20120814 18:56:15,361 DEBUG op.getOutput().size(): 2
        20120814 18:56:15,361 DEBUG Invoking handleMessage on interceptor org.apache.cxf.jaxws.interceptors.SwAOutInterceptor@7a9224
        20120814 18:56:15,364 DEBUG Invoking handleMessage on interceptor org.apache.cxf.jaxws.interceptors.WrapperClassOutInterceptor@110b640
        20120814 18:56:15,364 DEBUG Invoking handleMessage on interceptor org.apache.cxf.binding.soap.interceptor.SoapHeaderOutFilterInterceptor@b914b3
        20120814 18:56:15,365 DEBUG Invoking handleMessage on interceptor org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor@c22a3b
        20120814 18:56:15,365 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@1b2d7df
        20120814 18:56:15,365 DEBUG Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@dc9065 to phase prepare-send-ending
        20120814 18:56:15,366 DEBUG Chain org.apache.cxf.phase.PhaseInterceptorChain@31688f was modified. Current flow:
          pre-logical [HolderOutInterceptor, SwAOutInterceptor, WrapperClassOutInterceptor, SoapHeaderOutFilterInterceptor]
          post-logical [SoapPreProtocolOutInterceptor]
          prepare-send [MessageSenderInterceptor, GZIPOutInterceptor]
          pre-stream [AttachmentOutInterceptor, StaxOutInterceptor]
          write [SoapOutInterceptor]
          marshal [BareOutInterceptor]
          prepare-send-ending [MessageSenderEndingInterceptor]
        20120814 18:56:15,366 DEBUG Invoking handleMessage on interceptor org.apache.cxf.transport.http.gzip.GZIPOutInterceptor@1682a53
        20120814 18:56:15,366 DEBUG Requestor role, so gzip enabled
        20120814 18:56:15,366 DEBUG gzip permitted: YES
        20120814 18:56:15,367 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.AttachmentOutInterceptor@158015a
        20120814 18:56:15,367 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.StaxOutInterceptor@c0c8b5
        20120814 18:56:15,370 DEBUG Adding interceptor org.apache.cxf.interceptor.StaxOutInterceptor$StaxOutEndingInterceptor@1f488f1 to phase pre-stream-ending
        20120814 18:56:15,370 DEBUG Chain org.apache.cxf.phase.PhaseInterceptorChain@31688f was modified. Current flow:
          pre-logical [HolderOutInterceptor, SwAOutInterceptor, WrapperClassOutInterceptor, SoapHeaderOutFilterInterceptor]
          post-logical [SoapPreProtocolOutInterceptor]
          prepare-send [MessageSenderInterceptor, GZIPOutInterceptor]
          pre-stream [AttachmentOutInterceptor, StaxOutInterceptor]
          write [SoapOutInterceptor]
          marshal [BareOutInterceptor]
          pre-stream-ending [StaxOutEndingInterceptor]
          prepare-send-ending [MessageSenderEndingInterceptor]
        20120814 18:56:15,370 DEBUG Invoking handleMessage on interceptor org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor@1629e71
        20120814 18:56:15,383 DEBUG Adding interceptor org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor$SoapOutEndingInterceptor@1ce663c to phase write-ending
        20120814 18:56:15,384 DEBUG Chain org.apache.cxf.phase.PhaseInterceptorChain@31688f was modified. Current flow:
          pre-logical [HolderOutInterceptor, SwAOutInterceptor, WrapperClassOutInterceptor, SoapHeaderOutFilterInterceptor]
          post-logical [SoapPreProtocolOutInterceptor]
          prepare-send [MessageSenderInterceptor, GZIPOutInterceptor]
          pre-stream [AttachmentOutInterceptor, StaxOutInterceptor]
          write [SoapOutInterceptor]
          marshal [BareOutInterceptor]
          write-ending [SoapOutEndingInterceptor]
          pre-stream-ending [StaxOutEndingInterceptor]
          prepare-send-ending [MessageSenderEndingInterceptor]
        20120814 18:56:15,384 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.BareOutInterceptor@fdfc58
        20120814 18:56:15,387 DEBUG Compressing message.
        20120814 18:56:15,388 DEBUG Sending POST Message with Headers to http://test.co.kr:80/###/###/### Conduit: {https://test.co.kr/###/####}###.http-conduit Content-Type: text/xml; charset=UTF-8
        20120814 18:56:15,388 DEBUG SOAPAction: "getAirAvail"
        20120814 18:56:15,388 DEBUG Accept: */*
        20120814 18:56:15,388 DEBUG Accept-Encoding: gzip;q=1.0, identity; q=0.5, *;q=0
        20120814 18:56:15,388 DEBUG Content-Encoding: gzip
        20120814 18:56:15,388 DEBUG No Trust Decider for Conduit '{https://test.co.kr/###/###}###.http-conduit'. An afirmative Trust Decision is assumed.
        20120814 18:56:15,394 DEBUG Invoking handleMessage on interceptor org.apache.cxf.binding.soap.interceptor.SoapOutInterceptor$SoapOutEndingInterceptor@1ce663c
        20120814 18:56:15,394 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.StaxOutInterceptor$StaxOutEndingInterceptor@1f488f1
        20120814 18:56:15,394 DEBUG Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@dc9065
        20120814 18:56:15,459 DEBUG Response Code: 200 Conduit: {https://test.co.kr/###/###}###.http-conduit
        20120814 18:56:15,459 DEBUG Content length: 11034
        20120814 18:56:15,459 DEBUG Header fields:
          null: [HTTP/1.1 200 OK]
          Content-Language: [ko-KR]
          Date: [Tue, 14 Aug 2012 09:56:15 GMT]
          Content-Length: [11034]
          P3P: [CP='CAO PSA CONi OTR OUR DEM ONL']
          Expires: [Thu, 01 Dec 1994 16:00:00 GMT]
          Keep-Alive: [timeout=10, max=100]
          Set-Cookie: [WMONID=mL6rq_Irpa_; Expires=Wed, 14 Aug 2013 09:56:15 GMT; Path=/]
          Connection: [Keep-Alive]
          Content-Type: [text/xml; charset=utf-8]
          Server: [IBM_HTTP_Server]
          Cache-Control: [no-cache="set-cookie, set-cookie2"]

    Read the article

  • Laissez les bons temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong, and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic.

    From the first paragraph of Roy Fielding's "REST APIs must be hypertext-driven", it's pretty clear he believes his work is being widely misinterpreted:

        I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today's example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

    Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example:

        A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...

        A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ...

        A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ...

    The idea of "hypertext" plays a central role, much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments:

        When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

    I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system:

        GET /foos/{id}   # read a Foo
        POST /foos/{id}  # create a Foo
        PUT /foos/{id}   # update a Foo

    Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment.
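    To make my guess concrete, here is the kind of response I imagine a GET against /foos returning. The media type name and the link relations here are entirely made up for the sake of the example:

        HTTP/1.1 200 OK
        Content-Type: application/vnd.example.foo+json

        {
          "foos": [
            {
              "name": "first foo",
              "links": [
                { "rel": "self",   "href": "http://example.com/foos/17" },
                { "rel": "update", "href": "http://example.com/foos/17" },
                { "rel": "delete", "href": "http://example.com/foos/17" }
              ]
            }
          ],
          "links": [
            { "rel": "create", "href": "http://example.com/foos" }
          ]
        }

    If that reading is right, a client that understands the (hypothetical) application/vnd.example.foo+json media type and its relation names can discover every URI and every available action from the response itself; nothing but the entry URI and the media type documentation is agreed on in advance.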
    Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich:

        Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won't, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck.

    So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying, and how do you put it into practice when documenting/implementing REST APIs?

    Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General

    Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I'm going to cover briefly in this post the following:

    - Selection Sort
    - Insertion Sort
    - Shellsort
    - Quicksort
    - Mergesort
    - Heapsort (not complete)

    Selection Sort

    Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one.

    The first iteration would go something like this:

    - Go through the entire array of data and find the lowest value
    - Place the value at the front of the array

    The second iteration would go something like this:

    - Go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array
    - Place the value at the second position in the array

    This process would be repeated until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is, however, a comparison-intensive sort. If you had 10 items in a collection, just to parse the collection you would have 9+8+7+6+5+4+3+2+1=45 comparisons to make, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection that was already sorted, you would still perform the same number of comparisons as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted, leaving one with a best case and worst case scenario for comparisons. Likewise, different approaches have different levels of item movement. One may give priority to one approach over another based on what is more expensive: a comparison or an item move.

    Insertion Sort

    Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" if things are sorted. Assume you had a collection of numbers in the following order:

    10 18 25 30 23 17 45 35

    There are 8 elements in the list. If we were to start at the front of the list, 10 18 25 & 30 are already sorted. Element 5 (23), however, is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So:

    - Element 5 is copied to a temporary holder: 10 18 25 30 23 17 45 35 – T 23
    - Element 4 shifts to element 5: 10 18 25 30 30 17 45 35 – T 23
    - Element 3 shifts to element 4: 10 18 25 25 30 17 45 35 – T 23
    - Element 2 (18) is smaller than the temporary holder, so we put the temporary holder value into element 3: 10 18 23 25 30 17 45 35 – T 23

    We now have a sorted list up to element 6, and so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
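    Since both of these array-based sorts come up every year, here is a minimal C# sketch of each (my own illustration for this post, not code from the university course):

        // Selection sort: on each pass, find the smallest remaining element
        // and swap it into the front of the unsorted region.
        // Few item moves (at most one swap per pass), many comparisons.
        static void SelectionSort(int[] a)
        {
            for (int i = 0; i < a.Length - 1; i++)
            {
                int min = i;
                for (int j = i + 1; j < a.Length; j++)
                    if (a[j] < a[min]) min = j;
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
            }
        }

        // Insertion sort: copy the out-of-place value to a temporary holder,
        // shift the larger elements before it up one, then drop the holder in.
        static void InsertionSort(int[] a)
        {
            for (int i = 1; i < a.Length; i++)
            {
                int holder = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > holder)
                {
                    a[j + 1] = a[j]; // the shifting discussed above and below
                    j--;
                }
                a[j + 1] = holder;
            }
        }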
As you can see, one major setback for this technique is the shifting of values up one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and “reinsert” it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n-1 comparisons would need to be performed to verify that it is sorted. It’s important to note that insertion sort (array based) performs a number of item moves – every time an item is “out of place” several items before it get shifted up. Shellsort – Diminishing Increment Sort So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements… 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55 First we may view the collection as 7 sub-collections and sort each sublist, let’s say at intervals of 7 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted) We then sort each sublist at a smaller interval – let’s say 4 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted) We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort) 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (Sorted) The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn’t any definitive method and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow for even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort, in terms of the number of comparisons and item movements, is hard to determine exactly, however it is considered to be considerably better than the normal insertion sort. Quicksort Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections – and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The general algorithm in pseudo code is below… Divide the collection into two sub-collections. Quicksort the lower sub-collection. Quicksort the upper sub-collection. Combine the lower & upper sub-collections together. As hinted at above, quicksort uses recursion in its implementation, as in the sketch below.
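To illustrate that pseudo code, here is a minimal recursive quicksort sketch in C++. Taking the last element as the pivot is my own simplification for the sketch – as the next paragraph explains, choosing a good pivot is the real trick.

#include <utility>
#include <vector>

// Partition a[lo..hi] around a pivot (here simply a[hi]) and return
// the pivot's final position: everything to its left is smaller,
// everything to its right is greater or equal.
static int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    return i;
}

// Quicksort the lower sub-collection, then the upper sub-collection.
// No real combine step is needed: partitioning already placed every
// element of the lower half before every element of the upper half.
void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;             // base case: 0 or 1 elements
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);
    quickSort(a, p + 1, hi);
}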
The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one would partition into two sub-collections and then repeat the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection. Mergesort With quicksort, the average-case complexity was O(n log2 n), however the worst case complexity was still O(n^2). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows… Divide the collection into two sub-collections. Mergesort the first sub-collection. Mergesort the second sub-collection. Merge the first sub-collection and the second sub-collection. As you can see… it still pretty much looks like quicksort – so let’s see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot – mergesort partitions each sub-collection based on size so that the first and second sub-collections are of roughly the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is how do we put all these sub-collections together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let’s have a look at a few examples… Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smallest of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20 so 10 is taken from sub-collection 1 leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20. Let’s assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So… again we would Compare 10 with 15 – 10 is the winner so we add it to our sorted collection (10) leaving us with 20 & 15 19. Compare 20 with 15 – 15 is the winner so we add it to our sorted collection (10 15) leaving us with 20 & 19. Compare 20 with 19 – 19 is the winner so we add it to our sorted collection (10 15 19) leaving us with 20 & _. 20 is by default the winner so our sorted collection is 10 15 19 20. Make sense? Heapsort (still needs to be completed) So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn’t we just skip to Mergesort & Quicksort.
I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn’t the best way of implementing things and that you don’t need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: A heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k (if it exists) and 2k + 1 (if it exists). Does that make sense? At first glance I’m thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way – although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it only needs to be “semi sorted”. Let me try and explain a bit further with an example… Example 1 of Potential Heap Data Assume we had a collection of numbers as follows 1[1] 2[2] 3[3] 4[4] 5[5] 6[6] For this to be a valid heap the element with value of 1 at position [1] needs to be greater than or equal to the element at position [2] (2k) and position [3] (2k + 1). So in the above example, the collection of numbers is not a valid heap. Example 2 of Potential Heap Data Let’s look at another collection of numbers as follows 6[1] 5[2] 4[3] 3[4] 2[5] 1[6] Is this a valid heap? Well… the element with the value 6 at position [1] must be greater than or equal to the elements at position [2] and position [3]. Is 6 > 5 and 6 > 4? Yes it is. Let’s look at element 5 at position [2]. It must be greater than the values at [4] & [5]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
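Before leaving the topic, two small C++ sketches – again my own illustrations rather than anything from the course notes – may help tie the last two sections together: the first is the merge step walked through in the mergesort section, the second checks the heap definition given above, using the same 1-based positions as the examples (element k is stored at index k-1, its children sit at positions 2k and 2k+1).

#include <vector>

// Merge two sorted sub-collections into one sorted collection by
// repeatedly taking the smaller front element, exactly as in the
// "10 20 & 15 19" walkthrough.
std::vector<int> merge(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> sorted;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size())
        sorted.push_back(a[i] <= b[j] ? a[i++] : b[j++]);
    while (i < a.size()) sorted.push_back(a[i++]);   // leftovers from a
    while (j < b.size()) sorted.push_back(b[j++]);   // leftovers from b
    return sorted;
}

// Returns true when the key at every 1-based position k is at least
// as large as the keys at positions 2k and 2k+1, whenever they exist.
bool isValidHeap(const std::vector<int>& a) {
    std::size_t n = a.size();
    for (std::size_t k = 1; k <= n; ++k) {
        std::size_t left = 2 * k, right = 2 * k + 1;
        if (left <= n && a[k - 1] < a[left - 1]) return false;
        if (right <= n && a[k - 1] < a[right - 1]) return false;
    }
    return true;
}

For the two examples above, isValidHeap on 1 2 3 4 5 6 returns false, while on 6 5 4 3 2 1 it returns true.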

    Read the article

  • Windows Phone 7: Building a simple dictionary web client

    - by TechTwaddle
    Like I mentioned in this post a while back, I came across a dictionary web service called Aonaware that serves up word definitions from various dictionaries and is really easy to use. The services page on their website, http://services.aonaware.com/DictService/DictService.asmx, lists all the operations that are supported by the dictionary service. Here they are:
    Word Dictionary Web Service – the following operations are supported (for a formal definition, please review the Service Description):
    - Define – Define given word, returning definitions from all dictionaries
    - DefineInDict – Define given word, returning definitions from specified dictionary
    - DictionaryInfo – Show information about the specified dictionary
    - DictionaryList – Returns a list of available dictionaries
    - DictionaryListExtended – Returns a list of advanced dictionaries (e.g. translating dictionaries)
    - Match – Look for matching words in all dictionaries using the given strategy
    - MatchInDict – Look for matching words in the specified dictionary using the given strategy
    - ServerInfo – Show remote server information
    - StrategyList – Return list of all available strategies on the server
    Follow the links above to get more information on each API. In this post we will be building a simple Windows Phone 7 client which uses this service to get word definitions for words entered by the user. The application will also allow the user to select a dictionary from all the available ones and look up the word definition in that dictionary. So of all the apis above we will be using only two, DictionaryList() to get a list of all supported dictionaries and DefineInDict() to get the word definition from a particular dictionary. Before we get started, a note to you all: I would have liked to implement this application using concepts from data binding, item templates, data templates etc. I have a basic understanding of what they are but, being a beginner, I am not very comfortable with those topics yet so I didn’t use them. I thought I’ll get this version out of the way and maybe in the next version I could give those a try. A somewhat scary mock-up of what the final application will look like: Select Dictionary is a list picker control from the Silverlight toolkit (you need to download and install the toolkit if you haven’t already). Below it is a textbox where the user can enter words to look up and a button beside it to fetch the word definition when clicked. Finally we have a textblock which occupies the remaining area and displays the word definition from the selected dictionary. Create a Silverlight application for Windows Phone 7, AonawareDictionaryClient, and add references to the Silverlight toolkit and the web service. From the Solution Explorer, right-click on References and select Microsoft.Phone.Controls.Toolkit from under the .NET tab. Next, add a reference to the web service. Again right-click on References and this time select Add Service Reference. In the resulting dialog paste the service URL in the Address field and press Go, (url –> http://services.aonaware.com/DictService/DictService.asmx) once the service is discovered, provide a name for the NameSpace, in this case I’ve called it AonawareDictionaryService. Press OK. You can now use the classes and functions that are generated in the AonawareDictionaryClient.AonawareDictionaryService namespace. Let’s get the UI done now.
In MainPage.xaml add a namespace declaration to use the toolkit controls, xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit" the content of LayoutRoot is changed as follows, (sorry, no syntax highlighting in this post) <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,5,0,5">     <TextBlock x:Name="ApplicationTitle" Text="AONAWARE DICTIONARY CLIENT" Style="{StaticResource PhoneTextNormalStyle}"/>     <!--<TextBlock x:Name="PageTitle" Text="page name" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>--> </StackPanel> <!--ContentPanel - place additional content here--> <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">     <Grid.RowDefinitions>         <RowDefinition Height="Auto"/>         <RowDefinition Height="Auto"/>         <RowDefinition Height="*"/>     </Grid.RowDefinitions>     <toolkit:ListPicker Grid.Column="1" x:Name="listPickerDictionaryList"                         Header="Select Dictionary :">     </toolkit:ListPicker>     <Grid Grid.Row="1" Margin="0,5,0,0">         <Grid.ColumnDefinitions>             <ColumnDefinition Width="*"/>             <ColumnDefinition Width="Auto" />         </Grid.ColumnDefinitions>         <TextBox x:Name="txtboxInputWord" Grid.Column="0" GotFocus="OnTextboxInputWordGotFocus" />         <Button x:Name="btnGo" Grid.Column="1" Click="OnButtonGoClick" >             <Button.Content>                 <Image Source="/images/button-go.png"/>             </Button.Content>         </Button>     </Grid>     <ScrollViewer Grid.Row="2" x:Name="scrollViewer">         <TextBlock  Margin="12,5,12,5"  x:Name="txtBlockWordMeaning" HorizontalAlignment="Stretch"                    VerticalAlignment="Stretch" TextWrapping="Wrap"                    FontSize="26" />     </ScrollViewer> </Grid> I have commented out the PageTitle as it occupies too much valuable space, and the ContentPanel is changed to contain three rows. First row contains the list picker control, second row contains the textbox and the button, and the third row contains a textblock within a scroll viewer. The designer will now be showing the final UI. Now go to MainPage.xaml.cs, and add the following namespace declarations, using Microsoft.Phone.Controls; using AonawareDictionaryClient.AonawareDictionaryService; using System.IO.IsolatedStorage; A class called DictServiceSoapClient would have been created for you in the background when you added a reference to the web service. This class functions as a wrapper to the services exported by the web service. All the web service functions that we saw at the start can be accessed through this class, or more precisely through an object of this class. Create a data member of type DictServiceSoapClient in the MainPage class, and a function which initializes it, DictServiceSoapClient DictSvcClient = null; private DictServiceSoapClient GetDictServiceSoapClient() {     if (null == DictSvcClient)     {         DictSvcClient = new DictServiceSoapClient();     }     return DictSvcClient; } We have two major tasks remaining. First, when the application loads we need to populate the list picker with all the supported dictionaries and second, when the user enters a word and clicks on the arrow button we need to fetch the word’s meaning. Populating the List Picker In the OnNavigatedTo override of the MainPage, we call the DictionaryList() api. This can also be done in the page's Loaded event handler; not sure if one has an advantage over the other.
Here’s the code for OnNavigatedTo, protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) {     DictServiceSoapClient client = GetDictServiceSoapClient();     client.DictionaryListCompleted += new EventHandler<DictionaryListCompletedEventArgs>(OnGetDictionaryListCompleted);     client.DictionaryListAsync();     base.OnNavigatedTo(e); } Windows Phone 7 supports only async calls to web services. When we added a reference to the dictionary service, asynchronous versions of all the functions were generated automatically. So in the above function we register a handler to the DictionaryListCompleted event which will occur when the call to DictionaryList() gets a response from the server. Then we call the DictionaryListAsynch() function which is the async version of the DictionaryList() api. The result of this api will be sent to the handler OnGetDictionaryListCompleted(), void OnGetDictionaryListCompleted(object sender, DictionaryListCompletedEventArgs e) {     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     Dictionary[] listOfDictionaries;     if (e.Error == null)     {         listOfDictionaries = e.Result;         PopulateListPicker(listOfDictionaries, settings);     }     else if (settings.Contains("SavedDictionaryList"))     {         listOfDictionaries = settings["SavedDictionaryList"] as Dictionary[];         PopulateListPicker(listOfDictionaries, settings);     }     else     {         MessageBoxResult res = MessageBox.Show("An error occured while retrieving dictionary list, do you want to try again?", "Error", MessageBoxButton.OKCancel);         if (MessageBoxResult.OK == res)         {             GetDictServiceSoapClient().DictionaryListAsync();         }     }     settings.Save(); } I have used IsolatedStorageSettings to store a few things; the entire dictionary list and the dictionary that is selected when the user exits the application, so that the next time when the user starts the application the current dictionary is set to the last selected value. First we check if the api returned any error, if the error object is null e.Result will contain the list (actually array) of Dictionary type objects. If there was an error, we check the isolated storage settings to see if there is a dictionary list stored from a previous instance of the application and if so, we populate the list picker based on this saved list. Note that in this case there are chances that the dictionary list might be out of date if there have been changes on the server. Finally, if none of these cases are true, we display an error message to the user and try to fetch the list again. 
PopulateListPicker() is passed the array of Dictionary objects and the settings object as well, void PopulateListPicker(Dictionary[] listOfDictionaries, IsolatedStorageSettings settings) {     listPickerDictionaryList.Items.Clear();     foreach (Dictionary dictionary in listOfDictionaries)     {         listPickerDictionaryList.Items.Add(dictionary.Name);     }     settings["SavedDictionaryList"] = listOfDictionaries;     string savedDictionaryName;     if (settings.Contains("SavedDictionary"))     {         savedDictionaryName = settings["SavedDictionary"] as string;     }     else     {         savedDictionaryName = "WordNet (r) 2.0"; //default dictionary, wordnet     }     foreach (string dictName in listPickerDictionaryList.Items)     {         if (dictName == savedDictionaryName)         {             listPickerDictionaryList.SelectedItem = dictName;             break;         }     }     settings["SavedDictionary"] = listPickerDictionaryList.SelectedItem as string; } We first clear all the items from the list picker, add the dictionary names from the array and then create a key in the settings called SavedDictionaryList and store the dictionary list in it. We then check if there is saved dictionary available from a previous instance, if there is, we set it as the selected item in the list picker. And if not, we set “WordNet ® 2.0” as the default dictionary. Before returning, we save the selected dictionary in the “SavedDictionary” key of the isolated storage settings. Fetching word definitions Getting this part done is very similar to the above code. We get the input word from the textbox, call into DefineInDictAsync() to fetch the definition and when DefineInDictAsync completes, we get the result and display it in the textblock. Here is the handler for the button click, private void OnButtonGoClick(object sender, RoutedEventArgs e) {     txtBlockWordMeaning.Text = "Please wait..";     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     if (txtboxInputWord.Text.Trim().Length <= 0)     {         MessageBox.Show("Please enter a word in the textbox and press 'Go'");     }     else     {         Dictionary[] listOfDictionaries = settings["SavedDictionaryList"] as Dictionary[];         string selectedDictionary = listPickerDictionaryList.SelectedItem.ToString();         string dictId = "wn"; //default dictionary is wordnet (wn is the dict id)         foreach (Dictionary dict in listOfDictionaries)         {             if (dict.Name == selectedDictionary)             {                 dictId = dict.Id;                 break;             }         }         DictServiceSoapClient client = GetDictServiceSoapClient();         client.DefineInDictCompleted += new EventHandler<DefineInDictCompletedEventArgs>(OnDefineInDictCompleted);         client.DefineInDictAsync(dictId, txtboxInputWord.Text.Trim());     } } We validate the input and then select the dictionary id based on the currently selected dictionary. We need the dictionary id because the api DefineInDict() expects the dictionary identifier and not the dictionary name. We could very well have stored the dictionary id in isolated storage settings too. Again, same as before, we register a event handler for the DefineInDictCompleted event and call the DefineInDictAsync() method passing in the dictionary id and the input word. 
void OnDefineInDictCompleted(object sender, DefineInDictCompletedEventArgs e) {     WordDefinition wd = e.Result;     scrollViewer.ScrollToVerticalOffset(0.0f);     if (wd.Definitions.Length == 0)     {         txtBlockWordMeaning.Text = String.Format("No definitions were found for '{0}' in '{1}'", txtboxInputWord.Text.Trim(), listPickerDictionaryList.SelectedItem.ToString().Trim());     }     else     {         foreach (Definition def in wd.Definitions)         {             string str = def.WordDefinition;             str = str.Replace("  ", " "); //some formatting             txtBlockWordMeaning.Text = str;         }     } } When the api completes, e.Result will contain a WordDefnition object. This class is also generated in the background while adding the service reference. We check the word definitions within this class to see if any results were returned, if not, we display a message to the user in the textblock. If a definition was found the text on the textblock is set to display the definition of the word. Adding final touches, we now need to save the current dictionary when the application exits. A small but useful thing is selecting the entire word in the input textbox when the user selects it. This makes sure that if the user has looked up a definition for a really long word, he doesn’t have to press ‘clear’ too many times to enter the next word, protected override void OnNavigatingFrom(System.Windows.Navigation.NavigatingCancelEventArgs e) {     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     settings["SavedDictionary"] = listPickerDictionaryList.SelectedItem as string;     settings.Save();     base.OnNavigatingFrom(e); } private void OnTextboxInputWordGotFocus(object sender, RoutedEventArgs e) {     TextBox txtbox = sender as TextBox;     if (txtbox.Text.Trim().Length > 0)     {         txtbox.SelectionStart = 0;         txtbox.SelectionLength = txtbox.Text.Length;     } } OnNavigatingFrom() is called whenever you navigate away from the MainPage, since our application contains only one page that would mean that it is exiting. I leave you with a short video of the application in action, but before that if you have any suggestions on how to make the code better and improve it please do leave a comment. Until next time…

    Read the article

  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
    The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements. Doing Page Layouts without Tables Back in the 1990s, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this: <table height="100%"> <tr> <td valign="top" width="300px" bgcolor="red"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </td> <td valign="top" bgcolor="green"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </td> <td valign="top" width="300px" bgcolor="blue"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </td> </tr> </table> When the table above gets rendered out to a browser, you end up with the following three-column layout: The width of the left and right columns is fixed – the width of the middle column expands or contracts depending on the width of the browser. Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout – it was collectively decided by the spirit of the Web – you should use Cascading Style Sheets. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (The Web Content Accessibility Guidelines 1.0 is explicit about this) and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post 2005, anyone who used HTML tables for layout was encouraged to hold their head down in shame. That’s all well and good, but the problem with using CSS for layout is that it can be more difficult to work with CSS than HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here’s a three-column layout with floats: <style type="text/css"> #container { min-width: 800px; } #leftColumn { float: left; width: 300px; height: 100%; background-color:red; } #middleColumn { background-color:green; height: 100%; } #rightColumn { float: right; width: 300px; height: 100%; background-color:blue; } </style> <div id="container"> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> </div> The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right.
Notice that the rightColumn DIV appears in the page before the middleColumn DIV – this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem). The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome: And the layout mostly works with Internet Explorer 9 except for the fact that, for some strange reason, the min-width doesn’t work; so when you shrink the width of your browser, you can get the following unwanted layout: Notice how the middle column (the green column) bleeds to the left and right. People have solved these issues with more complicated CSS. For example, see: http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS and we know that we could achieve the same layout more easily using HTML tables. Using CSS Grid Layout CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a “Working Draft” state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/ The CSS Grid Layout standard is only supported by Internet Explorer 10 and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript. Here’s how you can create a standard three-column layout using a CSS Grid Layout: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } </style> </head> <body> <div id="container"> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> </div> </body> </html> When the page above is rendered in Internet Explorer 10, you get a standard three-column layout: The page above contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to Grid display mode with the following CSS rule: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } The display property is set to the value “-ms-grid”. This property causes the container DIV to lay out its child elements in a grid. (Notice that you use “-ms-grid” instead of “grid”. The “-ms-“ prefix is used because the CSS Grid Layout standard is still preliminary.
This implementation only works with IE10 and it might change before the final release.) The grid columns and rows are defined with the “-ms-grid-columns” and “-ms-grid-rows” properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed sized at 300 pixels. The middle column sizes automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules: #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } The “-ms-grid-column” property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows and then you position different elements within these columns and rows. Very straightforward. Creating Multiple Columns and Rows In the previous section, we created a super simple three-column layout. This layout contained only a single row. In this section, let’s create a slightly more complicated layout which contains more than one row: The following page contains a header row, a content row, and a footer row. The content row contains three columns: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color:red; } #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color:blue; } #footer { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 3; background-color: orange; } </style> </head> <body> <div id="container"> <div id="header"> Header, Header, Header </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="footer"> Footer, Footer, Footer </div> </div> </body> </html> In the page above, the grid layout is created with the following rule which creates a grid with three rows and three columns: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } The header is created with the following rule: #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } The header is positioned in column 1 and row 1. Furthermore, notice that the “-ms-grid-column-span” property is used to span the header across three columns. CSS Grid Layout and Fractional Units When you use CSS Grid Layout, you can take advantage of fractional units. 
Fractional units provide you with an easy way of dividing up remaining space in a page. Imagine, for example, that you want to create a four-column page layout. You want the size of the first column to be fixed at 200 pixels and you want to divide the remaining space among the remaining three columns. The width of the second column is equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths: #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts. Notice that the single row is set to a height of 1fr. The single grid row gobbles up the entire vertical space. Here’s the entire HTML page: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } #firstColumn { -ms-grid-column: 1; background-color:red; } #secondColumn { -ms-grid-column: 2; background-color:green; } #thirdColumn { -ms-grid-column: 3; background-color:blue; } #fourthColumn { -ms-grid-column: 4; background-color:orange; } </style> </head> <body> <div id="container"> <div id="firstColumn"> First Column, First Column, First Column </div> <div id="secondColumn"> Second Column, Second Column, Second Column </div> <div id="thirdColumn"> Third Column, Third Column, Third Column </div> <div id="fourthColumn"> Fourth Column, Fourth Column, Fourth Column </div> </div> </body> </html>   Summary There is more in the CSS 3 Grid Layout standard than discussed in this blog post. My goal was to describe the basics. If you want to learn more, then you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/ In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.

    Read the article

  • Upload Certificate and Key to RUEI in order to decrypt SSL traffic

    - by stefan.thieme(at)oracle.com
    So you want to monitor encrypted traffic with your RUEI collector? Actually this is an easy thing if you follow the lines below... I will start out with creating a pair of snakeoil (so-called self-signed) certificate and key with the make-ssl-cert tool which comes pre-packaged with Apache, purely for the purpose of this example.
$ sudo make-ssl-cert generate-default-snakeoil
$ sudo ls -l /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key
-rw-r--r-- 1 root root     615 2010-06-07 10:03 /etc/ssl/certs/ssl-cert-snakeoil.pem
-rw-r----- 1 root ssl-cert 891 2010-06-07 10:03 /etc/ssl/private/ssl-cert-snakeoil.key
RUEI Configuration of Security SSL Keys
You will most likely get these two files from your Certificate Authority (CA) and/or your system administrators should be able to extract this from your WebServer or LoadBalancer handling SSL encryption for your infrastructure. Now let's look at the content of these two files: the certificate (apache assumes this is in PEM format) is called a public key and the private key is used by the apache server to encrypt traffic for a client using the certificate to initiate the SSL connection with the server. In case you already know that these two match, you simply have to paste them in one text file and upload this text file to your RUEI instance.
$ sudo cat /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key > /tmp/ruei.cert_and_key
$ sudo cat /tmp/ruei.cert_and_key
-----BEGIN CERTIFICATE-----
MIIBmTCCAQICCQD7O3XXwVilWzANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDEwZ1
YnVudHUwHhcNMTAwNjA3MDgwMzUzWhcNMjAwNjA0MDgwMzUzWjARMQ8wDQYDVQQD
EwZ1YnVudHUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALbs+JnI+p+K7Iqa
SQZdnYBxOpdRH0/9jt1QKvmH68v81h9+f1Z2rVR7Zrd/l+ruE3H9VvuzxMlKuMH7
qBX/gmjDZTlj9WJM+zc0tSk+e2udy9he20lGzTxv0vaykJkuKcvSWNk4WE9NuAdg
IHZvjKgoTSVmvM1ApMCg69nyOy97AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAk2rv
VEkxR1qPSpJiudDuGUHtWKBKWiWbmSwI3REZT+0vG+YDG5a55NdxgRk3zhQntqF7
gNYjKxblBByBpY7W0ci00kf7kFgvXWMeU96NSQJdnid/YxzQYn0dGL2rSh1dwdPN
NPQlNSfnEQ1yxFevR7aRdCqTbTXU3mxi8YaSscE=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQC27PiZyPqfiuyKmkkGXZ2AcTqXUR9P/Y7dUCr5h+vL/NYffn9W
dq1Ue2a3f5fq7hNx/Vb7s8TJSrjB+6gV/4Jow2U5Y/ViTPs3NLUpPntrncvYXttJ
Rs08b9L2spCZLinL0ljZOFhPTbgHYCB2b4yoKE0lZrzNQKTAoOvZ8jsvewIDAQAB
AoGBAJ7LCWeeUwnKNFqBYmD3RTFpmX4furnal3lBDX0945BZtJr0WZ/6N679zIYA
aiVTdGfgjvDC9lHy3n3uctRd0Jqdh2QoSSxNBhq5elIApNIIYzu7w/XI/VhGcDlA
b6uadURQEC2q+M8YYjw3mwR2omhCWlHIViOHe/9T8jfP/8pxAkEA7k39WRcQildH
DFKcj7gurqlkElHysacMTFWf0ZDTEUS6bdkmNXwK6mH63BlmGLrYAP5AMgKgeDf8
D+WRfv8YKQJBAMSCQ7UGDN3ysyfIIrdc1RBEAk4BOrKHKtD5Ux0z5lcQkaCYrK8J
DuSldreN2yOhS99/S4CRWmGkTj04wRSnjwMCQQCaR5mW3QzTU4/m1XEQxsBKSdZE
2hMSmsCmhuSyK13Kl0FPLr/C7qyuc4KSjksABa8kbXaoKfUz/6LLs+ePXZ2JAkAv
+mIPk5+WnQgS4XFgdYDrzL8HTpOHPSs+BHG/goltnnT/0ebvgXWqa5+1pyPm6h29
PrYveM2pY1Va6z1xDowDAkEAttfzAwAHz+FUhWQCmOBpvBuW/KhYWKZTMpvxFMSY
YD5PH6NNyLfBx0J4nGPN5n/f6il0s9pzt3ko++/eUtWSnQ==
-----END RSA PRIVATE KEY-----
Simply click on Add new key and browse for the cert_and_key file on your desktop which you concatenated earlier using any text editor. You may need to add a passphrase in order to decrypt the RSA key in some cases (it should tell you BEGIN ENCRYPTED PRIVATE KEY in the header line). I will show you the success screen after uploading the certificate to RUEI.
You may want to restart your collector once you have uploaded all the certificate/key pairs you want to use in order to make sure they get picked up asap. You should be able to see the number of SSL Connections rising in the Collector statistics screen below. The figures for decrypt errors should slowly go down and the usage figures for your encryption algorithm on the subsequent SSL Encryption screen should go up. You should be 100% sure everything works fine by now, otherwise see below to distinguish the remaining 1% from your 99% certainty.
Verify Certificate and Key are matching
You can compare the modulus of the private key and the public certificate; they should match in order for the key to fit the lock. We are actually interested only in the following details of the two files, which can be determined by using the -subject, -dates and -modulus command line switches instead of the complete -text output of the x509 certificate/rsa key contents.
$ sudo openssl x509 -noout -subject -in /etc/ssl/certs/ssl-cert-snakeoil.pem
subject= /CN=ubuntu
$ sudo openssl x509 -noout -dates -in /etc/ssl/certs/ssl-cert-snakeoil.pem
notBefore=Jun  7 08:03:53 2010 GMT
notAfter=Jun  4 08:03:53 2020 GMT
$ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/ssl-cert-snakeoil.pem
Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B
$ sudo openssl rsa -noout -modulus -in /etc/ssl/private/ssl-cert-snakeoil.key
Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B
As you can see the modulus matches exactly and we have the proof that the certificate has been created using the private key.
OpenSSL Certificate and Key DetailsAs I already told you, you do not need all the greedy details, but in case you want to know it in depth what is actually in those hex-blocks can be made visible with the following commands which show you the actual content in a human readable format.Note: You may not want to post all the details of your private key =^) I told you I have been using a self-signed certificate only for showing you these details.$ sudo openssl rsa -noout -text -in /etc/ssl/private/ssl-cert-snakeoil.keyPrivate-Key: (1024 bit)modulus:    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:    b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:    a4:c0:a0:eb:d9:f2:3b:2f:7bpublicExponent: 65537 (0x10001)privateExponent:    00:9e:cb:09:67:9e:53:09:ca:34:5a:81:62:60:f7:    45:31:69:99:7e:1f:ba:b9:da:97:79:41:0d:7d:3d:    e3:90:59:b4:9a:f4:59:9f:fa:37:ae:fd:cc:86:00:    6a:25:53:74:67:e0:8e:f0:c2:f6:51:f2:de:7d:ee:    72:d4:5d:d0:9a:9d:87:64:28:49:2c:4d:06:1a:b9:    7a:52:00:a4:d2:08:63:3b:bb:c3:f5:c8:fd:58:46:    70:39:40:6f:ab:9a:75:44:50:10:2d:aa:f8:cf:18:    62:3c:37:9b:04:76:a2:68:42:5a:51:c8:56:23:87:    7b:ff:53:f2:37:cf:ff:ca:71prime1:    00:ee:4d:fd:59:17:10:8a:57:47:0c:52:9c:8f:b8:    2e:ae:a9:64:12:51:f2:b1:a7:0c:4c:55:9f:d1:90:    d3:11:44:ba:6d:d9:26:35:7c:0a:ea:61:fa:dc:19:    66:18:ba:d8:00:fe:40:32:02:a0:78:37:fc:0f:e5:    91:7e:ff:18:29prime2:    00:c4:82:43:b5:06:0c:dd:f2:b3:27:c8:22:b7:5c:    d5:10:44:02:4e:01:3a:b2:87:2a:d0:f9:53:1d:33:    e6:57:10:91:a0:98:ac:af:09:0e:e4:a5:76:b7:8d:    db:23:a1:4b:df:7f:4b:80:91:5a:61:a4:4e:3d:38:    c1:14:a7:8f:03exponent1:    00:9a:47:99:96:dd:0c:d3:53:8f:e6:d5:71:10:c6:    c0:4a:49:d6:44:da:13:12:9a:c0:a6:86:e4:b2:2b:    5d:ca:97:41:4f:2e:bf:c2:ee:ac:ae:73:82:92:8e:    4b:00:05:af:24:6d:76:a8:29:f5:33:ff:a2:cb:b3:    e7:8f:5d:9d:89exponent2:    2f:fa:62:0f:93:9f:96:9d:08:12:e1:71:60:75:80:    eb:cc:bf:07:4e:93:87:3d:2b:3e:04:71:bf:82:89:    6d:9e:74:ff:d1:e6:ef:81:75:aa:6b:9f:b5:a7:23:    e6:ea:1d:bd:3e:b6:2f:78:cd:a9:63:55:5a:eb:3d:    71:0e:8c:03coefficient:    00:b6:d7:f3:03:00:07:cf:e1:54:85:64:02:98:e0:    69:bc:1b:96:fc:a8:58:58:a6:53:32:9b:f1:14:c4:    98:60:3e:4f:1f:a3:4d:c8:b7:c1:c7:42:78:9c:63:    cd:e6:7f:df:ea:29:74:b3:da:73:b7:79:28:fb:ef:    de:52:d5:92:9d$ sudo openssl x509 -noout -text -in /etc/ssl/certs/ssl-cert-snakeoil.pemCertificate:    Data:        Version: 1 (0x0)        Serial Number:            fb:3b:75:d7:c1:58:a5:5b        Signature Algorithm: sha1WithRSAEncryption        Issuer: CN=ubuntu        Validity            Not Before: Jun  7 08:03:53 2010 GMT            Not After : Jun  4 08:03:53 2020 GMT        Subject: CN=ubuntu        Subject Public Key Info:            Public Key Algorithm: rsaEncryption            RSA Public Key: (1024 bit)                Modulus (1024 bit):                    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:                    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:                    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:                    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:                    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:                    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:                    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:     
               b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:
                    a4:c0:a0:eb:d9:f2:3b:2f:7b
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha1WithRSAEncryption
        93:6a:ef:54:49:31:47:5a:8f:4a:92:62:b9:d0:ee:19:41:ed:
        58:a0:4a:5a:25:9b:99:2c:08:dd:11:19:4f:ed:2f:1b:e6:03:
        1b:96:b9:e4:d7:71:81:19:37:ce:14:27:b6:a1:7b:80:d6:23:
        2b:16:e5:04:1c:81:a5:8e:d6:d1:c8:b4:d2:47:fb:90:58:2f:
        5d:63:1e:53:de:8d:49:02:5d:9e:27:7f:63:1c:d0:62:7d:1d:
        18:bd:ab:4a:1d:5d:c1:d3:cd:34:f4:25:35:27:e7:11:0d:72:
        c4:57:af:47:b6:91:74:2a:93:6d:35:d4:de:6c:62:f1:86:92:
        b1:c1
The above output can also be seen if you direct your browser client to your website and check the certificate sent by the server to your browser. You will be able to look up all the details including the validity dates, subject common name and the public key modulus.
Capture an SSL connection using Wireshark
And as you would have expected, looking at the low-level tcp data that has been exchanged between the client and server with a tcp-diagnostics tool (i.e. wireshark/tcpdump) you can also see the modulus in there. These were the settings I used to capture all traffic on the local loopback interface, matching the filter expression: tcp and ip and host 127.0.0.1 and port 443. This tells Wireshark to leave out any other information I may not have been interested in showing you.

    Read the article

  • The Windows Store... why did I sign up with this mess again?

    - by FransBouma
    Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8. As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows store, but a presence there can't hurt. Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendous long list of rules and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application. Desktop applications are not following this path however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important. So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store and our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. If you don't have an account, you have to log in or sign up if you don't have a live account. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls every several minutes for email so I had to kick it to get it immediately. I grabbed the code from the email and I was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks: Per company there can just be 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud. 
Hey, did you fill in those TPS reports yesterday?" A separate Approver has to be specified, which has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily we're with two people! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I already made was not reversible! After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me. "Ok fair enough". The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount and are doing that with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard and after that you're verified. Of course they'll refund the small amount they pull from your card. Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show them. So it's possible I have to wait for this code till October 12th. One month. "So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job. Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an external hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know. Luckily for me, everyone at Symantec was asleep when I was registering so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine you have to explain the idiot web of security theater I was caught in to someone else who then has to reply a random person over the internet that I indeed was who I said I was. 
As she's a true sweetheart, she gave me the benefit of the doubt and assured that for now, I was who I said I was. Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet. I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I get a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure! I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that, the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But, I do feel special, I got a special price. Only for me! I'm glowing. Not for long though. Here I started to wonder, what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a Desktop application, which wasn't hosted by Microsoft. They only link to it. And make no mistake. These prices aren't single payments. Every year these have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough. To give you an example how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: it's a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information and they show up inside Visual Studio. No files are hosted at Microsoft, they're downloaded from our website. Exactly the same system. As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications. Even while they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework". While I was writing this post, I tried to check something in the Windows Store Dashboard, to see whether I remembered it correctly. I was presented again with the question, after logging in with my live account, to enter the code that was just mailed to me. Not the previous code, a brand new one. 
Again I had to kick my mail server to pull the email to proceed. That was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time. Now, about live accounts. You might know this: live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same live account. But the fun thing is, you can log in to the MSDN subscriptions with just the account id and password. No additional code is mailed to you, even though it gives you access to all Microsoft software available, including your licenses. Why the draconian security theater with this Windows Store, when all I want is to publish some desktop applications, while on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates? Microsoft, there's one thing you need with this store, and that's apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege to publish in this shiny store. As if I have to be thrilled to be one of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed. Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. A good read and good background info about how little things have changed over the years. I hope this helps Microsoft make things clearer and smoother, and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps that will change in the future, who knows.

    Read the article

  • SOLVED mwfeedparser integrating in my app gives EXC_BAD_ACCESS (code=1, address=0xa0040008)

    - by Pranoy C
SOLVED - Got it! The problem was that since I am creating the DoParsingStuff *parseThisUrl object in the viewDidLoad method, its scope was only within that method. So after the method finished, the object got deallocated. I changed it to an instance variable instead and now it works. It gives a different error, but that is an entirely different issue. The original issue was: I have been struggling with trying to integrate the mwfeedparser library in my app for parsing RSS and ATOM feeds. It throws an EXC_BAD_ACCESS error which I can't seem to troubleshoot. My interface looks like:

    #import <Foundation/Foundation.h>
    #import "MWFeedParser.h"
    #import "NSString+HTML.h"

    @protocol ParseCompleted <NSObject>
    -(void)parsedArray:(NSMutableArray *)parsedArray;
    @end

    @interface DoParsingStuff : NSObject <MWFeedParserDelegate>
    @property (nonatomic, strong) NSMutableArray *parsedItems;
    @property (nonatomic, strong) NSArray *itemsToDisplay;
    @property (nonatomic, strong) MWFeedParser *feedParser;
    @property (nonatomic, strong) NSURL *feedurl;
    @property (nonatomic, strong) id <ParseCompleted> delegate;
    -(id)initWithFeedURL:(NSURL *)url;
    @end

And the implementation:

    #import "DoParsingStuff.h"

    @implementation DoParsingStuff

    @synthesize parsedItems = _parsedItems;
    @synthesize itemsToDisplay = _itemsToDisplay;
    @synthesize feedParser = _feedParser;
    @synthesize feedurl = _feedurl;
    @synthesize delegate = _delegate;

    -(id)initWithFeedURL:(NSURL *)url {
        if (self = [super init]) {
            _feedurl = url;
            _feedParser = [[MWFeedParser alloc] initWithFeedURL:_feedurl];
            _feedParser.delegate = self;
            _feedParser.feedParseType = ParseTypeFull;
            _feedParser.connectionType = ConnectionTypeAsynchronously;
        }
        return self;
    }

    -(void)doParsing {
        BOOL y = [_feedParser parse];
    }

    #pragma mark -
    #pragma mark MWFeedParserDelegate

    - (void)feedParserDidStart:(MWFeedParser *)parser {
        // Just tells what url is being parsed, e.g. http://www.wired.com/reviews/feeds/latestProductsRss
        NSLog(@"Started Parsing: %@", parser.url);
    }

    - (void)feedParser:(MWFeedParser *)parser didParseFeedInfo:(MWFeedInfo *)info {
        // What the feed is about, e.g. "Product Reviews"
        NSLog(@"Parsed Feed Info: “%@”", info.title);
        //self.title = info.title;
    }

    - (void)feedParser:(MWFeedParser *)parser didParseFeedItem:(MWFeedItem *)item {
        // Prints the current element's title, e.g. “An Arthropod for Your iDevices”
        NSLog(@"Parsed Feed Item: “%@”", item.title);
        if (item) [_parsedItems addObject:item];
    }

    - (void)feedParserDidFinish:(MWFeedParser *)parser {
        // This is where you can do your own stuff with the parsed items
        NSLog(@"Finished Parsing%@", (parser.stopped ? @" (Stopped)" : @""));
        [_delegate parsedArray:_parsedItems];
        //[self updateTableWithParsedItems];
    }

    - (void)feedParser:(MWFeedParser *)parser didFailWithError:(NSError *)error {
        NSLog(@"Finished Parsing With Error: %@", error);
        if (_parsedItems.count == 0) {
            //self.title = @"Failed"; // Show failed message in title
        } else {
            // Failed but some items parsed, so show and inform of error
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Parsing Incomplete"
                                                            message:@"There was an error during the parsing of this feed. Not all of the feed items could be parsed."
                                                           delegate:nil
                                                  cancelButtonTitle:@"Dismiss"
                                                  otherButtonTitles:nil];
            [alert show];
        }
        //[self updateTableWithParsedItems];
    }

    @end

I am calling this from my main view controller as such:

    #import "DoParsingStuff.h"
    @interface ViewController : UIViewController <ParseCompleted>
    ....

And I have the following methods in my implementation:

    DoParsingStuff *parseThisUrl = [[DoParsingStuff alloc] initWithFeedURL:[NSURL URLWithString:@"http://www.theverge.com/rss/index.xml"]];
    parseThisUrl.delegate = self;
    [parseThisUrl doParsing];

I have the method defined here as:

    -(void)parsedArray:(NSMutableArray *)parsedArray {
        NSLog(@"%@", parsedArray);
    }

I stepped through breakpoints - everything goes fine till the very last [parseThisUrl doParsing]; in my delegate class. After that it starts showing me memory registers where I get lost. I think it could be due to ARC, as I have disabled ARC on the mwfeedparser files but am using ARC in the above classes. If you need the entire project for this, let me know. I tried it with NSZombies enabled and got a bit more info out of it: -[DoParsingStuff respondsToSelector:]: message sent to deallocated instance 0x6a52480. I am not using release/autorelease/retain etc. in this class... but they are used in the mwfeedparser library.
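To make the fix concrete for anyone who lands here with the same crash: the helper needs an owner that outlives viewDidLoad, because MWFeedParser calls its delegate back asynchronously, long after the method has returned. A minimal sketch of the calling view controller, consistent with the solution described above (the property name is illustrative):

    // ViewController.h
    #import <UIKit/UIKit.h>
    #import "DoParsingStuff.h"

    @interface ViewController : UIViewController <ParseCompleted>
    // A strong reference keeps the helper alive while the asynchronous parse
    // runs; a local variable in viewDidLoad is released as soon as the method
    // returns, which is exactly what produced the zombie message above.
    @property (nonatomic, strong) DoParsingStuff *parseThisUrl;
    @end

    // ViewController.m
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.parseThisUrl = [[DoParsingStuff alloc] initWithFeedURL:
            [NSURL URLWithString:@"http://www.theverge.com/rss/index.xml"]];
        self.parseThisUrl.delegate = self;
        [self.parseThisUrl doParsing];
    }

    - (void)parsedArray:(NSMutableArray *)parsedArray {
        NSLog(@"%@", parsedArray);
    }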

    Read the article

  • CodePlex Daily Summary for Friday, December 31, 2010

CodePlex Daily Summary for Friday, December 31, 2010

Popular Releases

- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, today we are releasing the final version of Visifire, v3.6.6, with the following new features: TextDecorations property is implemented in Title for Chart; TitleTextDecorations property is implemented in Axis; MinPointHeight property is now applicable for Column and Bar Charts. This release also includes a few bug fixes: ToolTipText property of DataSeries was not getting applied from Style; Chart threw an exception if IndicatorEnabled property was set to true and Too...
- Windows Weibo all in one for Sina Sohu and QQ: WeiBee V0.1: WeiBee is an all-in-one Twitter-style tool which can update WeiBo on several websites at the same time. It intends to support t.sohu.com, t.sina.com.cn and t.qq.com.cn. If you have WeiBo at SOHU, SINA and QQ, you can try this tool to save the time of opening all the webpages to update your status. For any business opportunity, such as putting advertising on the China Twitter market, or building a custom WeiBo tool, please reach my email box qq1800@gmail.com. My official WeiBo is http://mediaroom.t.sohu.com
- StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivity and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop-compliant versions while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...
- WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications use Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
- eCompany: eCompany v0.2.0 Build 63: Version 0.2.0 Build 63: Added splash screen & about box. Added downloading of currencies when eCompany is launched for the first time (must close any bug caused by no currency rate existing). Added corp creation when eCompany is launched for the first time (so for now, you don't need to edit the company.xml file manually). You just need to decompress the file "eCompany v0.2.0.63.zip" into your current eCompany install directory.
- SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. Added truncate table/defrag index/check db functions. 2. Improved alert. 3. Fixed problem with alert causing a corrupted config file (hopefully).
- Analysis Services Stored Procedure Project: 1.3.5 Release: This release includes the following fixes and new functionality: updates to GetCubeLastProcessedDate to work with perspectives; fixes to reports that call Discover functions; improved drillthrough functions against perspectives; improved ExecuteDrillthroughAndFixColumns logic; fixed a situation where an MDX query calling certain ASSP sprocs which opened external connections caused a deadlock in SSAS processing; small fix to Partition code when the DbColumnName property doesn't exist; changes...
- DocX: DocX v1.0.0.11: Building the Examples project: to build the Examples project, download DocX.dll and add it as a reference to the project. Overview: this version of DocX contains many bug fixes; it is a serious step towards a stable release. Added: 1) unit testing project, 2) Examples project, 3) too many bug fixes to list here, see the source code change list history.
- Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: debug info is now stored in a single .cpdb file (which is a Firebird database); keyboard input works now (using Console.ReadLine); console colors work (using Console.ForegroundColor and .BackgroundColor).
- AutoLoL: AutoLoL v1.5.0: Added the all-new Masteries Browser which replaces the Quick Open combobox. AutoLoL will now attempt to create file associations for mastery (*.lolm) files. Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering. Changed the way AutoLoL detects if another instance is already running. Changed the format of the mastery files to allow more information stored in*. Dialogs will now focus the Ok or Cancel button, which allows the user to press Return to clo...
- Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability: many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading: PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!
- Facebook C# SDK: 4.1.1: From the 4.1.1 release: authentication bug fix caused by a Facebook change (error with redirects in Safari); Authenticator fix (was always returning true). From the 4.1.0 release: lots of bug fixes; removed Dynamic Runtime Language dependencies from non-dynamic platforms; samples included in the release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample; changed internal serialization to use Json.net. BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...
- Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!
- EnhSim: EnhSim 2.2.7 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. - Mongoose has bee...
- Euro for Windows XP: ChangeRegionalSettings 1..0: *
- Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: The architecture was reviewed and adjusted so that the Web version and WPF version of this framework can be introduced next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suit the implemented features - Create New button functionality is changed - Added question-handling features
- Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky, so I discovered them all while at my parents' house for Christmas).
- NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in the expense category report.
- NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo).
- NewLife XCode: XCode v6.5.2010.1223: (The original release notes are in Chinese and were garbled in extraction; they cover the NewLife.Core, NewLife.Net, XControl, XTemplate, XAgent and NewLife.CommonEnitty components, compatibility notes going back to XCode v3.5.2009.0714, and the XCoder / XCoder_Src template tools.)

New Projects

- 1i2m3i4s5e6r7p: 1i2m3i4s5e6r7p
- A3 Fashion Web: A complete e-commerce site.
- Afonsoft Blog - Blog de teste: (Portuguese) A test blog for development experiments with MySQL or SQL Server on ASP.NET 3.5.
- AnyGrid for ASP.NET MVC: Which grid component should you use for your ASP.NET MVC project? How about all of them? AnyGrid makes it easy to switch between grid implementations, allowing a single action to, e.g., use two different grids for desktop and mobile views. It also supports DataAnnotations.
- BizTalk Map Test Framework: The Map Test Framework makes it easier for BizTalk developers to test their maps. You'll no longer have to maintain a whole bunch of XML files for your tests. The use of template files and xpath queries to perform tests will increase your productivity tremendously.
- BlogResult.NET: A simple blogging starter kit using S#harp Architecture. This project was originally course material sample code for an MVC class, but we decided to open source it and make it available to the community.
- Cafe For You: (Vietnamese) Software for presenting and managing a café.
- Caro: Demo code for a Caro game.
- Ecozany Skin for DotNetNuke: This Ecozany package contains three sample skins and a collection of containers for use in your DotNetNuke web sites.
- Enhanced lookup field: Enhanced lookup field makes it easier for end users to create filters. You'll no longer have to use a custom field or javascript to have a parent/child lookup, for example. It's developed in C#. Parent/multiple child lookups; advanced filter options; easy to use.
- jobglance: This is a job site.
- MaxLeafWeb_K3: MaxLeafWeb_K3
- Miage Kart: (French) M1 Java project.
- msystem: MVC .NET web system.
- NetController: (Portuguese) Control without arrow keys, using an accelerometer and the .NET Micro Framework.
- Pencils - A Blog Site framework: Another blog site framework.
- PICTShell: PICT is an efficient way to design test cases and test configurations for software systems. PICT Shell is the GUI for PICT.
- Portal de Solicitação de Serviços: (Portuguese) The Service Request Portal is a solution that serves a variety of service-provider companies. It was initially developed for an IT services company, but the goal is that, with the source published, it can be extended to many other kinds of providers.
- Runery: Runery RSPS
- sCut4s: sCut4s
- Sumacê Jogava Caxangá?: sumace makes it easier for gamers to play. You'll no longer have to play. It's developed in XNA.
- Supply_Chain_FInance: Supply chain finance has been thriving all over the world in recent years. In exploring its inner working mechanism, we sought a way to embed an information system in it. We are developing the system to suit domestic supply chain finance.
- TestAmir
- The Cosmos: School project on a SOA-based system (loan system).
- Time zone: A C# library for managing time changes for any time zone in the world. Based on the Olson database.
- Validate: Validate is a collection of extension methods that let you add validations to any object and display validations in a user/developer-friendly format. It's an extremely lightweight validation framework with a zero learning curve.
- Windows Phone 7 Silverlight ListBox with CheckBox Control: A Windows Phone 7 ListBox control providing CheckBoxes for item selection when in "Choose State" and no CheckBoxes in "Normal State", like the built-in Email app, with a nice transition. To switch between states, set the "IsInChooseState" property. The control inherits ListBox.
- XQSOFT.WF.Designer: This is a workflow designer.
- zhanghai: My test project.

    Read the article

  • Uploadify Minimum Image Width And Height

    - by Richard Knop
So I am using the Uploadify plugin to allow users to upload multiple images at once. The problem is I need to set a minimum width and height for images. Let's say 150x150px is the smallest image users can upload. How can I set this limitation in the Uploadify plugin? When a user tries to upload a smaller picture, I would like to display some error message as well. Here is the PHP file that is called by the plugin to upload images:

    <?php
    define('BASE_PATH', substr(dirname(dirname(__FILE__)), 0, -22));

    // set the include path
    set_include_path(BASE_PATH . '/../library' . PATH_SEPARATOR
        . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path());

    // autoload classes from the library
    function __autoload($class) {
        include str_replace('_', '/', $class) . '.php';
    }

    $configuration = new Zend_Config_Ini(BASE_PATH . '/application' . '/configs/application.ini', 'development');
    $dbAdapter = Zend_Db::factory($configuration->database);
    Zend_Db_Table_Abstract::setDefaultAdapter($dbAdapter);

    function _getTable($table) {
        include BASE_PATH . '/application/modules/default/models/' . $table . '.php';
        return new $table();
    }

    $albums = _getTable('Albums');
    $media = _getTable('Media');

    if (false === empty($_FILES)) {
        $tempFile = $_FILES['Filedata']['tmp_name'];
        $extension = end(explode('.', $_FILES['Filedata']['name']));

        // insert temporary row into the database
        $data = array();
        $data['type'] = 'photo';
        $data['type2'] = 'public';
        $data['status'] = 'temporary';
        $data['user_id'] = $_REQUEST['user_id'];
        $paths = $media->add($data, $extension, $dbAdapter);

        // save the photo
        move_uploaded_file($tempFile, BASE_PATH . '/public/' . $paths[0]);

        // create a thumbnail
        include BASE_PATH . '/library/My/PHPThumbnailer/ThumbLib.inc.php';
        $thumb = PhpThumbFactory::create(BASE_PATH . '/public/' . $paths[0]);
        $thumb->adaptiveResize(85, 85);
        $thumb->save(BASE_PATH . '/public/' . $paths[1]);

        // add watermark to the bottom right corner
        $pathToFullImage = BASE_PATH . '/public/' . $paths[0];
        $size = getimagesize($pathToFullImage);
        switch ($extension) {
            case 'gif': $im = imagecreatefromgif($pathToFullImage); break;
            case 'jpg': $im = imagecreatefromjpeg($pathToFullImage); break;
            case 'png': $im = imagecreatefrompng($pathToFullImage); break;
        }
        if (false !== $im) {
            $white = imagecolorallocate($im, 255, 255, 255);
            $font = BASE_PATH . '/public/fonts/arial.ttf';
            imagefttext($im, 13,        // font size
                        0,              // angle
                        $size[0] - 132, // x axis (top left is [0, 0])
                        $size[1] - 13,  // y axis
                        $white, $font, 'HunnyHive.com');
            switch ($extension) {
                case 'gif': imagegif($im, $pathToFullImage); break;
                case 'jpg': imagejpeg($im, $pathToFullImage, 100); break;
                case 'png': imagepng($im, $pathToFullImage, 0); break;
            }
            imagedestroy($im);
        }
        echo "1";
    }

And here's the javascript:

    $(document).ready(function() {
        $('#photo').uploadify({
            'uploader'   : '/flash-uploader/scripts/uploadify.swf',
            'script'     : '/flash-uploader/scripts/upload-public-photo.php',
            'cancelImg'  : '/flash-uploader/cancel.png',
            'scriptData' : {'user_id' : 'USER_ID'},
            'queueID'    : 'fileQueue',
            'auto'       : true,
            'multi'      : true,
            'sizeLimit'  : 2097152,
            'fileExt'    : '*.jpg;*.jpeg;*.gif;*.png',
            'wmode'      : 'transparent',
            'onComplete' : function() {
                $.get('/my-account/temporary-public-photos', function(data) {
                    $('#temporaryPhotos').html(data);
                });
            }
        });
        $('#upload_public_photo').hover(function() {
            var titles = '{';
            $('.title').each(function() {
                var title = $(this).val();
                if ('Title...' != title) {
                    var id = $(this).attr('name');
                    id = id.substr(5);
                    title = jQuery.trim(title);
                    if (titles.length > 1) {
                        titles += ',';
                    }
                    titles += '"' + id + '"' + ':"' + title + '"';
                }
            });
            titles += '}';
            $('#titles').val(titles);
        });
    });

Now bear in mind that I know how to check image dimensions in the PHP file, but I'm not sure how to modify the javascript so it won't upload images with very small dimensions.
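Since the Flash runtime doing the upload can't measure pixel dimensions before sending, one workable pattern is to validate server-side and signal failure through the response text that Uploadify hands back. This is only a sketch: the onComplete signature shown (event, queueID, fileObj, response, data) is how I recall the Uploadify 2.x callback being documented, so verify it against the version in use.

    // upload-public-photo.php: right after $tempFile is assigned, before any saving
    list($width, $height) = getimagesize($tempFile);
    if ($width < 150 || $height < 150) {
        echo 'ERROR: image must be at least 150x150px';
        exit; // skip the save / thumbnail / watermark steps entirely
    }

    // javascript: inspect the server response instead of assuming success
    'onComplete' : function(event, queueID, fileObj, response, data) {
        if (response.indexOf('ERROR') === 0) {
            alert(fileObj.name + ': ' + response.substr(7));
            return;
        }
        $.get('/my-account/temporary-public-photos', function(data) {
            $('#temporaryPhotos').html(data);
        });
    }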

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups ( e.g. Twitter ) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it's not MVC, it's crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface from software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. 
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems simpler and more intuitive ( e.g. PHP or Ruby ). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. 
And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like JQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack” coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API, even within the same web project. 
Basically the message is: utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and Sharepoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Service Framework, which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
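For readers who haven't seen the new framework, here is roughly what that convention-based model looks like. This is only a sketch based on the MVC4-era Web API bits; the Product type and the in-memory data are hypothetical placeholders, not anything from a shipped sample:

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Web.Http; // ASP.NET Web API, shipped alongside MVC4

    // Hypothetical model type for illustration.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : ApiController
    {
        // Hypothetical in-memory data; a real app would use a repository.
        private static readonly List<Product> products = new List<Product>
        {
            new Product { Id = 1, Name = "Tomato soup" },
            new Product { Id = 2, Name = "Yo-yo" }
        };

        // Convention over configuration: GET /api/products routes here
        // because the method name starts with "Get" and takes no parameters.
        public IEnumerable<Product> GetAllProducts()
        {
            return products;
        }

        // GET /api/products/5 binds "id" from the default route template.
        public Product GetProductById(int id)
        {
            var product = products.FirstOrDefault(p => p.Id == id);
            if (product == null)
            {
                // Speak native HTTP: a plain 404 rather than a SOAP-style fault.
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            return product;
        }
    }

Note how little abstraction sits between the controller and the HTTP semantics: verbs map to methods and failures map to status codes, which is the "embrace native HTTP standards" point made above.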

    Read the article

  • Driving me INSANE: Unable to Retrieve Metadata for

    - by Loren
I've been spending the past 3 days trying to fix this problem I'm encountering - it's driving me insane... I'm not quite sure what is causing this bug - here are the details: MVC4 + Entity Framework 4.4 + MySql + POCO/Code First. I'm setting up the above configuration; here are my classes:

    namespace BTD.DataContext
    {
        public class BTDContext : DbContext
        {
            public BTDContext()
                : base("name=BTDContext")
            {
            }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                base.OnModelCreating(modelBuilder);
                //modelBuilder.Conventions.Remove<System.Data.Entity.Infrastructure.IncludeMetadataConvention>();
            }

            public DbSet<Product> Products { get; set; }
            public DbSet<ProductImage> ProductImages { get; set; }
        }
    }

    namespace BTD.Data
    {
        [Table("Product")]
        public class Product
        {
            [Key]
            public long ProductId { get; set; }

            [DisplayName("Manufacturer")]
            public int? ManufacturerId { get; set; }

            [Required]
            [StringLength(150)]
            public string Name { get; set; }

            [Required]
            [DataType(DataType.MultilineText)]
            public string Description { get; set; }

            [Required]
            [StringLength(120)]
            public string URL { get; set; }

            [Required]
            [StringLength(75)]
            [DisplayName("Meta Title")]
            public string MetaTitle { get; set; }

            [DataType(DataType.MultilineText)]
            [DisplayName("Meta Description")]
            public string MetaDescription { get; set; }

            [Required]
            [StringLength(25)]
            public string Status { get; set; }

            [DisplayName("Create Date/Time")]
            public DateTime CreateDateTime { get; set; }

            [DisplayName("Edit Date/Time")]
            public DateTime EditDateTime { get; set; }
        }

        [Table("ProductImage")]
        public class ProductImage
        {
            [Key]
            public long ProductImageId { get; set; }
            public long ProductId { get; set; }
            public long? ProductVariantId { get; set; }
            [Required]
            public byte[] Image { get; set; }
            public bool PrimaryImage { get; set; }
            public DateTime CreateDateTime { get; set; }
            public DateTime EditDateTime { get; set; }
        }
    }

Here is my web.config setup:

    <connectionStrings>
      <add name="BTDContext"
           connectionString="Server=localhost;Port=3306;Database=btd;User Id=root;Password=mypassword;"
           providerName="MySql.Data.MySqlClient" />
    </connectionStrings>

The database AND tables already exist... I'm still pretty new with MVC but was using this tutorial. The application builds fine; however, when I try to add a controller using Product (BTD.Data) as my model class and BTDContext (BTD.DataContext) as my data context class, I receive the following error:

Unable to retrieve metadata for BTD.Data.Product: using the same DbCompiledModel to create context against different types of database servers is not supported. Instead, create a separate DbCompiledModel for each type of server being used.

I am at a complete loss - I've scoured Google with almost every different variation of that error message above I can think of, but to no avail. Here are the things I can verify: MySql is working properly. I'm using MySql Connector version 6.5.4 and have created other ASP.NET Web Forms + Entity Framework applications with ZERO problems. I have also tried including/removing this in my web.config:

    <system.data>
      <DbProviderFactories>
        <remove invariant="MySql.Data.MySqlClient"/>
        <add name="MySQL Data Provider"
             invariant="MySql.Data.MySqlClient"
             description=".Net Framework Data Provider for MySQL"
             type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.5.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
      </DbProviderFactories>
    </system.data>

I've literally been working on this bug for days - I'm to the point now that I would be willing to pay someone to solve it... no joke... I'd really love to use MVC 4 and Razor - I was so excited to get started on this, but now I'm pretty discouraged - I truly appreciate any help/guidance on this! Also note - I'm using Entity Framework from NuGet... Another note: I was using the default Visual Studio template that creates your MVC project with the account pages and other stuff. I JUST removed all references to the added files because they were trying to use the "DefaultConnection" which didn't exist - so I thought those files may be what was causing the error - however, still no luck after removing them. I just wanted to let everyone know I'm using the Visual Studio MVC project template which pre-creates a bunch of files. I will be trying to recreate this all from a blank MVC project which doesn't have those files - I will update this once I test that. Other references: it appears someone else is having the same issues I am - the only difference is they are using SQL Server. I tried tweaking all my code to follow the suggestions on this stackoverflow question/answer here, but still to no avail.
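One cheap way to narrow this down - and this is just a diagnostic sketch, not a known fix - is to take the scaffolding wizard out of the equation and exercise the context directly. If something like the following works from a throwaway console app carrying the same connectionStrings and DbProviderFactories config sections, then the model and the MySQL connection are fine and the failure is isolated to the Add Controller scaffolding step (all property values below are placeholders to satisfy the [Required] annotations):

    using System;
    using System.Linq;
    using BTD.Data;
    using BTD.DataContext;

    class Program
    {
        static void Main()
        {
            // Uses the same "BTDContext" connection string from the config file.
            using (var ctx = new BTDContext())
            {
                ctx.Products.Add(new Product
                {
                    Name = "Smoke test",
                    Description = "placeholder",
                    URL = "http://example.com",
                    MetaTitle = "placeholder",
                    Status = "temporary",
                    CreateDateTime = DateTime.Now,
                    EditDateTime = DateTime.Now
                });
                ctx.SaveChanges();
                Console.WriteLine("Products in table: {0}", ctx.Products.Count());
            }
        }
    }

If that round-trips successfully while the wizard still fails, the "different types of database servers" complaint is coming from the tooling's model-building step rather than from your code.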

    Read the article

  • Why isn't TextBox.Text in WPF animatable?

    - by cplotts
Ok, I have just run into something that is really catching me off-guard. I was helping a fellow developer with a couple of unrelated questions, and in his project he was animating text into some TextBlock(s). So, I went back to my desk and recreated the project (in order to answer his questions), but I accidentally used TextBox instead of TextBlock. My text wasn't animating at all! (A lot of help, I was!) Eventually, I figured out that his xaml was using TextBlock and mine was using TextBox. What is interesting is that Blend wasn't creating key frames when I was using TextBox. So, I got it to work in Blend using TextBlock(s) and then modified the xaml by hand, converting the TextBlock(s) into TextBox(es). When I ran the project, I got the following error:

InvalidOperationException: '(0)' Storyboard.TargetProperty path contains nonanimatable property 'Text'.

Well, it seems as if Blend was smart enough to know that ... and not generate the key frames in the animation (it would just modify the value directly on the TextBox). +1 for Blend. So, the question became: why isn't TextBox.Text animatable? The usual answer is that the particular property you are animating isn't a DependencyProperty. But this isn't the case: TextBox.Text is a DependencyProperty. So, now I am bewildered! Why can't you animate TextBox.Text? Let me include some xaml to illustrate the problem. The following xaml works ... but uses TextBlock(s):

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="TextBoxTextQuestion.MainWindow"
            x:Name="Window"
            Title="MainWindow"
            Width="640" Height="480">
        <Window.Resources>
            <Storyboard x:Key="animateTextStoryboard">
                <StringAnimationUsingKeyFrames Storyboard.TargetProperty="(TextBlock.Text)"
                                               Storyboard.TargetName="textControl">
                    <DiscreteStringKeyFrame KeyTime="0:0:1" Value="Goodbye"/>
                </StringAnimationUsingKeyFrames>
            </Storyboard>
        </Window.Resources>
        <Window.Triggers>
            <EventTrigger RoutedEvent="FrameworkElement.Loaded">
                <BeginStoryboard Storyboard="{StaticResource animateTextStoryboard}"/>
            </EventTrigger>
        </Window.Triggers>
        <Grid x:Name="LayoutRoot">
            <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center">
                <TextBlock x:Name="textControl" Text="Hello" FontFamily="Calibri" FontSize="32"/>
                <TextBlock Text="World!" Margin="0,25,0,0" FontFamily="Calibri" FontSize="32"/>
            </StackPanel>
        </Grid>
    </Window>

The following xaml does not work and uses TextBox.Text:

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="TextBoxTextQuestion.MainWindow"
            x:Name="Window"
            Title="MainWindow"
            Width="640" Height="480">
        <Window.Resources>
            <Storyboard x:Key="animateTextStoryboard">
                <StringAnimationUsingKeyFrames Storyboard.TargetProperty="(TextBox.Text)"
                                               Storyboard.TargetName="textControl">
                    <DiscreteStringKeyFrame KeyTime="0:0:1" Value="Goodbye"/>
                </StringAnimationUsingKeyFrames>
            </Storyboard>
        </Window.Resources>
        <Window.Triggers>
            <EventTrigger RoutedEvent="FrameworkElement.Loaded">
                <BeginStoryboard Storyboard="{StaticResource animateTextStoryboard}"/>
            </EventTrigger>
        </Window.Triggers>
        <Grid x:Name="LayoutRoot">
            <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center">
                <TextBox x:Name="textControl" Text="Hello" FontFamily="Calibri" FontSize="32"/>
                <TextBox Text="World!" Margin="0,25,0,0" FontFamily="Calibri" FontSize="32"/>
            </StackPanel>
        </Grid>
    </Window>
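For what it's worth, the explanation usually given is that a dependency property can individually opt out of animation, and TextBox.Text appears to be registered with metadata that prohibits it (the isAnimationProhibited flag on FrameworkPropertyMetadata), presumably because animating a user-editable, two-way-bound property would be a mess; treat that as a hypothesis rather than verified fact. The common workaround is to animate a property you own and forward it to Text. A minimal sketch, where TextBoxHelper and AnimatedText are made-up names:

    using System.Windows;
    using System.Windows.Controls;

    public static class TextBoxHelper
    {
        // An ordinary (animatable) attached property whose only job is to
        // push its current value into TextBox.Text.
        public static readonly DependencyProperty AnimatedTextProperty =
            DependencyProperty.RegisterAttached(
                "AnimatedText",
                typeof(string),
                typeof(TextBoxHelper),
                new PropertyMetadata(string.Empty, OnAnimatedTextChanged));

        public static string GetAnimatedText(DependencyObject obj)
        {
            return (string)obj.GetValue(AnimatedTextProperty);
        }

        public static void SetAnimatedText(DependencyObject obj, string value)
        {
            obj.SetValue(AnimatedTextProperty, value);
        }

        private static void OnAnimatedTextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            TextBox textBox = d as TextBox;
            if (textBox != null)
            {
                textBox.Text = (string)e.NewValue;
            }
        }
    }

The storyboard can then target (local:TextBoxHelper.AnimatedText) on the TextBox instead of (TextBox.Text), with local mapped to the namespace containing TextBoxHelper.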

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
Hello, I have a legacy Windows server service and (spawned) application that works fine in XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation" feature. (Note: as a StackOverflow newbie I'm limited to one link in this post, so you'll need to scroll to the bottom to look up the links for the starred *( ) items.) Consequently, I'm looking for code samples/security settings mojo that let you create a new process from a windows service for Windows 2008 Server, such that I can restore (and possibly surpass) the previous behavior. I need a solution that:

1. Creates the new process in a non-zero session to get around session-0 isolation restrictions (no access to graphics hardware from session 0). The official MS line on this is: "Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel."
2. The new process gets a window station/desktop (e.g. winsta0/default) that can be used to create windows DCs. I've found a solution (that launches OK in an interactive session) for this here: *(Starting an Interactive Client Process in C++ - 2)
3. The windows DC, when used as the basis for an *(OpenGL DescribePixelFormat enumeration - 3), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware).

Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine). A solution that also allowed the process to work (i.e. run with OpenGL hardware acceleration even when a terminal services session is open) would be fantastic, although not required. I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like *(this - 4) and *(this - 5)), they are not suitable solutions, as there is no guarantee of a user session already being logged in to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, whose privileges I can adjust within reason, although I'd prefer not to have to escalate privileges to include SeTcbPrivilege). For instance, here's a stub that I think should work, but it always returns error 1314 on the SetTokenInformation call (even though AdjustTokenPrivileges returned no errors). I've used some alternate strategies involving "LogonUser" as well (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using WTSGetActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in), although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I've removed error handling for ease of reading (still a bit messy - apologies; declarations added for completeness):

    HANDLE hToken = NULL, hDupToken = NULL;
    DWORD logonSessionId = 0, dwTokenLength = 0;
    TOKEN_PRIVILEGES tp;

    // Also tried using LogonUser(..) here
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
                     TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE,
                     &hToken);

    GetTokenInformation(hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength);

    DWORD consoleSessionId = WTSGetActiveConsoleSessionId();

    /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivilege as well)
    if (!WTSQueryUserToken(consoleSessionId, &hToken))
        ...
    */

    DuplicateTokenEx(hToken,
                     (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
                      TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE),
                     NULL, SecurityIdentification, TokenPrimary, &hDupToken);

    // Look up the LUID for the TCB Name privilege.
    LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid);

    // Enable the TCB Name privilege in the token.
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0))
    {
        DisplayError("AdjustTokenPrivileges");
        // ...
    }
    if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
    {
        DEBUG("Token does not have the necessary privilege.\n");
    }
    else
    {
        DEBUG("No error reported from AdjustTokenPrivileges!\n");
    }

    // Never errors here
    DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId);
    if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD)))
        // *** ALWAYS FAILS WITH 1314 HERE ***

All the debug output looks fine up until the SetTokenInformation call - I see session 0 is my current process session, and in my case it's trying to set session 1 (the result of WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC.) So - the questions:

1. Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally?
2. Is there a better approach (short of "launch on logon" and auto-logon for the servers)?
3. Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.)

Any help much appreciated - thanks! (You need to replace the 'zttp' with 'http' - StackOverflow restriction on one link in my newbie post)
2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx
3: http://www.opengl.org/resources/faq/technical/mswindows.htm
4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server
5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service
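One thing worth verifying before anything else: error 1314 is ERROR_PRIVILEGE_NOT_HELD, and AdjustTokenPrivileges can only enable privileges that are already present in the token; it cannot add new ones, which is exactly what an ERROR_NOT_ALL_ASSIGNED result after a "successful" call indicates. Setting TokenSessionId is documented as requiring SeTcbPrivilege, so if the service's domain account hasn't been granted "Act as part of the operating system", the call above can never succeed. A small hedged sketch to check whether the privilege is even present in the process token (standard Win32 calls, no assumptions beyond the hToken handle obtained above):

    #include <windows.h>

    // Returns TRUE if SeTcbPrivilege is present in the token's privilege list
    // (it may still need enabling via AdjustTokenPrivileges before use).
    BOOL HasTcbPrivilege(HANDLE hToken)
    {
        LUID tcbLuid;
        if (!LookupPrivilegeValue(NULL, SE_TCB_NAME, &tcbLuid))
            return FALSE;

        // First call gets the required buffer size.
        DWORD cb = 0;
        GetTokenInformation(hToken, TokenPrivileges, NULL, 0, &cb);
        PTOKEN_PRIVILEGES ptp = (PTOKEN_PRIVILEGES)LocalAlloc(LPTR, cb);
        if (!ptp || !GetTokenInformation(hToken, TokenPrivileges, ptp, cb, &cb))
        {
            if (ptp) LocalFree(ptp);
            return FALSE;
        }

        BOOL found = FALSE;
        for (DWORD i = 0; i < ptp->PrivilegeCount; ++i)
        {
            if (ptp->Privileges[i].Luid.LowPart == tcbLuid.LowPart &&
                ptp->Privileges[i].Luid.HighPart == tcbLuid.HighPart)
            {
                found = TRUE;
                break;
            }
        }
        LocalFree(ptp);
        return found;
    }

If this returns FALSE for the service account, the fix is administrative (grant the privilege via local security policy, or run under LocalSystem) rather than anything in the token-manipulation code.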

    Read the article

  • CodePlex Daily Summary for Saturday, April 07, 2012

CodePlex Daily Summary for Saturday, April 07, 2012

Popular Releases

- Harness - Internet Explorer Automation: Harness 2.0.3: Support the operation of frameset, frame and iframe. Added commands: SwitchFrame, GetUrl, GoBack, GoForward, Refresh, SetTimeout, GetTimeout. Renamed commands: GetActiveWindow to GetActiveBrowser, SetActiveWindow to SetActiveBrowser, FindWindowAll to FindBrowser, NewWindow to NewBrowser, GetMajorVersion to GetVersion.
- Better Explorer: Better Explorer 2.0.0.861 Alpha: Fixed new folder button operation not working well in some situations; removed some unnecessary code like subclassing that is not needed anymore; added option to make Better Explorer the default (at least for WIN+E operations); added option to enable file operation replacements (like TeraCopy) to work with Better Explorer; added some basic usability to the "Share" button; other fixes.
- LightFarsiDictionary: LightFarsiDictionary - v1 (the Farsi project title was garbled in extraction).
- WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. [O] WAF: mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...
- RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.30: Changes: NEW: added support for "DirectUpload.net" links; NEW: added support for "PixRoute.com" links; NEW: added support for "ImagePicasa.com" links; FIXED: "PixHub.eu" links.
- Community TFS Build Extensions: April 2012: Release notes to follow...
- ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have conditional formatting. The conditional formatting was sponsored by http://www.bewing.nl (big thanks). New in v0.65.1: fixed an issue when loading conditional formatting with default values for icon sets. New in v0.65.2: fixed an issue loading conditional formatting; improved insert performance.
- Liberty: v3.2.0.0 Release 4th April 2012: Change log - Added: Halo 3 support (invincibility, ammo editing); Halo 3: ODST support (invincibility, ammo editing); the file transfer page now shows its progress in the Windows 7 taskbar; "About this build" settings page. Reach: change what an object is carrying; change which node a carried object is attached to; object node viewer and exporter; change which weapons you are carrying from the object editor; edit the weapon controller of vehicles and turrets; an error dia...
- MSBuild Extension Pack: April 2012: Release blog post. The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following. System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...
- DotNetNuke® Community Edition CMS: 06.01.05: Major highlights: fixed issue that stopped users from creating vocabularies when the portal ID was not zero; fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages; fixed page quota restriction issue in the Ribbon Bar; removed restriction that would not allow users to use a dash in page names, so users can now create pages with names like "site-map"; fixed issue that was causing the wrong container to be loaded in modules wh...
- 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One-click install from NuGet. Changes to version 2.1.3.1: 1. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to version 2.1.2.1: Code changes: 1. The project is now licensed under the Mozilla...
- MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added support for the Mvc4 beta and WebApi. The SafeQuery and HttpSafeQuery IQueryable implementations work as wrappers around any IQueryable to protect it from unwished queries. A "client side" pager specialized in paging javascript data coming either from a remote data source or from local data. A LinQ-like fluent javascript api to build queries either against remote data sources or against local javascript data, with exactly the same interface. There are 3 different query objects exp...
- ExtAspNet: ExtAspNet v3.1.2: (The Chinese description was largely garbled in extraction; the recoverable parts read roughly as follows.) ExtAspNet is a professional set of ASP.NET 2.0 controls based on ExtJS with built-in AJAX support: no hand-written JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices required. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Links: http://extasp.net/, http://bbs.extasp.net/, http://extaspnet.codeplex.com/, http://sanshi.cnblogs.com/. +2012-04-04 v3.1.2 fixes an IE bug involving "about:blank"...
- nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.50: Highlight features & improvements: significant performance optimization; allow store owners to create several shipments per order (added a new shipping status: "Partially shipped"); pre-order support added, enabling your customers to place a pre-order and pay for the item in advance; displays a "Pre-order" button instead of "Buy Now" on the appropriate pages; makes it possible for customers to buy available goods and pre-order items during one session; it can be managed on a product variant ...
- WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-available
- SageFrame: SageFrame 2.0: SageFrame is an open source ASP.NET web development framework developed using ASP.NET 3.5 with Service Pack 1 (SP1) technology. It is designed specifically to help developers build dynamic websites by providing core functionality common to most web applications.
- iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in the iTunes library.
- Document.Editor: 2012.2: What's new for Document.Editor 2012.2: new Save Copy support; new Page Setup support; minor bug fixes, improvements and speed-ups.
- VidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successfully completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.
- MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light-indexed deferred rendering using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.

New Projects

- Advertising Management: (Vietnamese) Advertising management software.
- Agile Compact Database: It is a database for all.
- AssemblyTransformer: AssemblyTransformer is a tool for modifying .NET assemblies using Mono Cecil. It handles the entire transformation process including strong name signing and offers a simple command-line interface and a basic framework for creating and configuring specific transformations.
- Cafe For You: (Vietnamese) Software for presenting and managing a café.
- Client-side Templated Script Control: Allows a developer to add a repeater-style templated list control to a web page that will be data-bound client-side and may respond to client events. The control may be data-bound by a web service call on initialization, and may also have its data source set via client code.
- CRM Project - Beginner Sample: Sample to help beginners start in C# development. (The original description was given in both English and Spanish.)
- Deployment Made Easy: The goal of this project is to make deployments to Windows servers easy using the Web Deployment tool.
- EasyCMS: EasyCMS
- Excel to SQL Server Database Bulk Transfer: Quick and simple WPF tool to allow users to export data from an Excel spreadsheet to a SQL Server database table. Provided as is, but if you need any help let me know.
- HTML Client demo for WCF RIA Services: Demo application with an HTML client (upshot.js + knockout.js) on WCF RIA Services.
- KOI: Kinect Open Interface: Kinect Open Interface, KOI, provides a way to detect and have the user confirm 11 gestures for your UI. Please read my blog for info: http://www.kinecthelp.com/2012/04/koi-kinect-open-interface.html
- LazyWinAdmin: LazyWinAdmin is a PowerShell script to manage local or remote machine resources.
- LCDSmartie dll to display Audio spectrum on Windows 7: An LCDSmartie plugin that displays anything being played as an audio spectrum.
- LiveHelpChatApp: With Live Help Chat you can provide online/offline help to your clients. It has Facebook-style chat for online and offline users. Download and enjoy.
- MailSender: Small tool for sending mail messages containing multiple attachments whose total size is bigger than the allowed size. You can drag'n'drop attachments and click send; the application splits the attachments into parts and sends them separately. There is no address book yet.
- Mauricio: Mauricio Lima page.
- Middleware and Enterprise services foundation: Define a model of deployment and management for middleware and enterprise applications.
- MyFirstPro: This is a test project.
- Old Games Launcher: Old Games Launcher is a combined DosBox frontend & DirectDraw game/application starter.
- Pharmakos Studio: Pharmakos Studio is an extensible IDE. It was originally written specifically as an UnrealScript editor for the UDK; however, it is being written so that any language can be supported via plugins.
- Proyecto Eclipse-Android: (Spanish) Projects with Eclipse-Android.
- Proyectos II: (Spanish) Project for a pharmacy.
- pullsource: pull source directsource filter.
- Quizzer: Awesome program for quizzes and tests.
- Solution Settings for Visual Studio: Solution Settings for Visual Studio allows a file containing settings such as formatting, fonts and colors to be included with a project. When the solution is opened, these settings are automatically applied, and when it is closed, the changes are reverted.
- sundance: test test test
- Webcomic Reader: A little idea for an on- and offline usable, touch-friendly Windows 7 webcomic reader.
- WinRT PathTextBlock: WinRT PathTextBlock is a control that overcomes some of the limitations in the built-in WinRT TextBlock, such as not being able to outline the text and not being able to distort the text, for example to draw it along a circle. Previously, you could use a tool like Expression Design to create the text and export it as a Path, but this wouldn't work for text that needed to be specified at run time. This control allows you to specify the Text property and it will generate the proper Path obj...
- Yaplex open source projects: Yaplex open source projects.
- (Two further project names in Chinese were garbled in extraction; the recoverable fragments read "…API SDK-VB6(oauth2)" and "…API SDK VB6".)

    Read the article

  • Crash when trying to get NSManagedObject from NSFetchedResultsController after 25 objects?

    - by Jeremy
Hey everyone, I'm relatively new to Core Data on iOS, but I think I've been getting better with it. I've been experiencing a bizarre crash, however, in one of my applications and have not been able to figure it out. I have approximately 40 objects in Core Data, presented in a UITableView. When tapping on a cell, a UIActionSheet appears, presenting the user with options related to the cell that was selected. So that I can reference the selected object, I declare an NSIndexPath in my header called "lastSelection" and do the following when the UIActionSheet is presented:

    // Each cell has a tag based on its row number (i.e. the first row has tag 0)
    lastSelection = [NSIndexPath indexPathForRow:[sender tag] inSection:0];
    NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection];
    BOOL onDuty = [[managedObject valueForKey:@"onDuty"] boolValue];
    UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@"Status"
                                                             delegate:self
                                                    cancelButtonTitle:nil
                                               destructiveButtonTitle:nil
                                                    otherButtonTitles:nil];
    if (onDuty) {
        [actionSheet addButtonWithTitle:@"Off Duty"];
    } else {
        [actionSheet addButtonWithTitle:@"On Duty"];
    }
    actionSheet.actionSheetStyle = UIActionSheetStyleBlackOpaque;
    // Override the typical UIActionSheet behavior by presenting it overlapping the
    // sender's frame. This makes it clearer which cell is selected.
    CGRect senderFrame = [sender frame];
    CGPoint point = CGPointMake(senderFrame.origin.x + (senderFrame.size.width / 2),
                                senderFrame.origin.y + (senderFrame.size.height / 2));
    CGRect popoverRect = CGRectMake(point.x, point.y, 1, 1);
    [actionSheet showFromRect:popoverRect inView:[sender superview] animated:NO];
    [actionSheet release];

When the UIActionSheet is dismissed with a button, the following code is called:

    - (void)actionSheet:(UIActionSheet *)actionSheet willDismissWithButtonIndex:(NSInteger)buttonIndex {
        // Set the status based on the UIActionSheet button pressed
        if (buttonIndex == -1) {
            return;
        }
        NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection];
        if ([actionSheet.title isEqualToString:@"Status"]) {
            if ([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"On Duty"]) {
                [managedObject setValue:[NSNumber numberWithBool:YES] forKey:@"onDuty"];
                [managedObject setValue:@"onDuty" forKey:@"status"];
            } else {
                [managedObject setValue:[NSNumber numberWithBool:NO] forKey:@"onDuty"];
                [managedObject setValue:@"offDuty" forKey:@"status"];
            }
        }
        NSError *error;
        [self.managedObjectContext save:&error];
        [tableView reloadData];
    }

This might not be the most efficient code (sorry, I'm new!), but it does work. That is, for the first 25 items in the list. Selecting the 26th item or beyond, the UIActionSheet will still appear, but if it is dismissed with a button, I get a variety of errors, including any one of the following:

    [__NSCFArray section]: unrecognized selector sent to instance 0x4c6bf90
    Program received signal: "EXC_BAD_ACCESS"
    [_NSObjectID_48_0 section]: unrecognized selector sent to instance 0x4c54710
    [__NSArrayM section]: unrecognized selector sent to instance 0x4c619a0
    [NSComparisonPredicate section]: unrecognized selector sent to instance 0x6088790
    [NSKeyPathExpression section]: unrecognized selector sent to instance 0x4c18950

If I comment out

    NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection];

it doesn't crash anymore, so I believe it has something to do with that. Can anyone offer any insight? Please let me know if I need to include any other information. Thanks!

EDIT: Interestingly, my fetchedResultsController code returns a different object every time. Is this expected, or could this be a cause of my issue? The code looks like this:

    - (NSFetchedResultsController *)fetchedResultsController {
        /* Set up the fetched results controller. */
        // Create the fetch request for the entity.
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        // Edit the entity name as appropriate.
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Employee"
                                                  inManagedObjectContext:self.managedObjectContext];
        [fetchRequest setEntity:entity];
        // Set the batch size to a suitable number.
        [fetchRequest setFetchBatchSize:80];
        // Edit the sort key as appropriate.
        NSString *sortKey;
        BOOL ascending;
        if (sortControl.selectedSegmentIndex == 0) {
            sortKey = @"startTime";
            ascending = YES;
        } else if (sortControl.selectedSegmentIndex == 1) {
            sortKey = @"name";
            ascending = YES;
        } else {
            sortKey = @"onDuty";
            ascending = NO;
        }
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:sortKey ascending:ascending];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        // Edit the section name key path and cache name if appropriate.
        NSFetchedResultsController *aFetchedResultsController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                managedObjectContext:self.managedObjectContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:@"Root"];
        aFetchedResultsController.delegate = self;
        self.fetchedResultsController = aFetchedResultsController;
        [aFetchedResultsController release];
        [fetchRequest release];
        [sortDescriptor release];
        [sortDescriptors release];

        NSError *error = nil;
        if (![fetchedResultsController_ performFetch:&error]) {
            /* Replace this implementation with code to handle the error appropriately.
               abort() causes the application to generate a crash log and terminate. You should
               not use this function in a shipping application, although it may be useful during
               development. If it is not possible to recover from the error, display an alert
               panel that instructs the user to quit the application by pressing the Home button. */
            //NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        return fetchedResultsController_;
    }

This happens when I set a breakpoint:

    (gdb) po [self fetchedResultsController]
    <NSFetchedResultsController: 0x61567c0>
    (gdb) po [self fetchedResultsController]
    <NSFetchedResultsController: 0x4c83630>
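[Note, not part of the original question: the getter above has no "already created?" guard, so every access builds a brand-new NSFetchedResultsController, and the non-retained lastSelection index path (an autoreleased object assigned to a plain ivar) can be deallocated before the action sheet is dismissed, which is consistent with the over-release crash signatures shown. A minimal sketch of the conventional memoized accessor, assuming manual reference counting as in the question, would be (plausible fix, not confirmed by the original thread):

    - (NSFetchedResultsController *)fetchedResultsController {
        // Return the cached controller instead of building a new one on every access.
        if (fetchedResultsController_ != nil) {
            return fetchedResultsController_;
        }
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        [fetchRequest setEntity:[NSEntityDescription entityForName:@"Employee"
                                            inManagedObjectContext:self.managedObjectContext]];
        [fetchRequest setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]]];
        NSFetchedResultsController *aFetchedResultsController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                managedObjectContext:self.managedObjectContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:@"Root"];
        aFetchedResultsController.delegate = self;
        self.fetchedResultsController = aFetchedResultsController;  // retained by the property
        [aFetchedResultsController release];
        [fetchRequest release];

        NSError *error = nil;
        if (![fetchedResultsController_ performFetch:&error]) {
            abort();  // see the caveat in the original code about shipping apps
        }
        return fetchedResultsController_;
    }

and the index path should be kept alive past the current autorelease pool, e.g. via a retained property:

    // In the header: @property (nonatomic, retain) NSIndexPath *lastSelection;
    self.lastSelection = [NSIndexPath indexPathForRow:[sender tag] inSection:0];]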

    Read the article

  • How to count each digit in a range of integers?

    - by Carlos Gutiérrez
Imagine you sell those metallic digits used to number houses, locker doors, hotel rooms, etc. You need to find how many of each digit to ship when your customer needs to number doors/houses: 1 to 100; 51 to 300; 1 to 2,000 with zeros to the left. The obvious solution is to loop from the first to the last number, convert the counter to a string with or without leading zeros, extract each digit, and use it as an index to increment an array of 10 integers. I wonder if there is a better way to solve this, without having to loop through the entire range of integers. Solutions in any language or pseudocode are welcome.

Edit, answers review: John at CashCommons and Wayne Conrad comment that my current approach is good and fast enough. Let me use a silly analogy: if you were given the task of counting the squares on a chess board in less than 1 minute, you could finish the task by counting the squares one by one, but a better solution is to count the sides and multiply, because you may later be asked to count the tiles in a building. Alex Reisner points to a very interesting mathematical law that, unfortunately, doesn't seem to be relevant to this problem. Andres suggests the same algorithm I'm using, but extracting digits with %10 operations instead of substrings. John at CashCommons and phord propose pre-calculating the digits required and storing them in a lookup table or, for raw speed, an array. This could be a good solution if we had an absolute, unmovable, set-in-stone maximum integer value. I've never seen one of those. High-Performance Mark and strainer computed the needed digits for various ranges. The result for one million seems to indicate there is a proportion, but the results for other numbers show different proportions. strainer found some formulas that may be used to count digits for numbers which are a power of ten. Robert Harvey had a very interesting experience posting the question at MathOverflow. One of the math guys wrote a solution using mathematical notation. Aaronaught developed and tested a solution using mathematics. After posting it he reviewed the formulas originated from MathOverflow and found a flaw in it (a point for Stack Overflow :). noahlavine developed an algorithm and presented it in pseudocode.

A new solution. After reading all the answers, and doing some experiments, I found that for a range of integers from 1 to 10^n - 1:

- For digits 1 to 9, n*10^(n-1) pieces are needed
- For digit 0, if not using leading zeros, n*10^(n-1) - (10^n - 1)/9 are needed
- For digit 0, if using leading zeros, n*10^(n-1) - n are needed

The first formula was found by strainer (and probably by others), and I found the other two by trial and error (but they may be included in other answers). For example, if n = 6, the range is 1 to 999,999:

- For digits 1 to 9 we need 6*10^5 = 600,000 of each one
- For digit 0, without leading zeros, we need 6*10^5 - (10^6 - 1)/9 = 600,000 - 111,111 = 488,889
- For digit 0, with leading zeros, we need 6*10^5 - 6 = 599,994

These numbers can be checked using High-Performance Mark's results. Using these formulas, I improved the original algorithm. It still loops from the first to the last number in the range of integers, but, if it finds a number which is a power of ten, it uses the formulas to add to the digit counts the quantity for a full range of 1 to 9, or 1 to 99, or 1 to 999, etc.
Here's the algorithm in pseudocode:

    integer First, Last  // First and last number in the range
    integer Number       // Current number in the loop
    integer Power        // Power is the n in 10^n in the formulas
    integer Nines        // Nines is the result of 10^n - 1; 10^5 - 1 = 99999
    integer Prefix       // First digits of a number. For 14,200, the prefix is 142
    array 0..9 Digits    // Will hold the count for all the digits

    FOR Number = First TO Last
        CALL TallyDigitsForOneNumber WITH Number, 1   // Tally the count of each digit
                                                      // in the number, incrementing by 1
        // Start of optimization. Comments are for Number = 1,000 and Last = 8,000.
        Power = Zeros at the end of Number            // For 1,000, Power = 3
        IF Power > 0                                  // The number ends in 0, 00, 000, etc.
            Nines = 10^Power - 1                      // Nines = 10^3 - 1 = 1000 - 1 = 999
            IF Number + Nines <= Last                 // If 1,000 + 999 < 8,000, add a full set
                Digits[0-9] += Power*10^(Power-1)     // Add 3*10^(3-1) = 300 to digits 0 to 9
                Digits[0] -= Power                    // Adjust digit 0 (leading-zeros formula)
                Prefix = First digits of Number       // For 1,000, the prefix is 1
                CALL TallyDigitsForOneNumber WITH Prefix, Nines  // Tally the count of each
                                                                 // digit in the prefix,
                                                                 // incrementing by 999
                Number += Nines                       // Advance the loop counter 999 cycles
            ENDIF
        ENDIF
        // End of optimization
    ENDFOR

    SUBROUTINE TallyDigitsForOneNumber PARAMS Number, Count
        REPEAT
            Digits[Number % 10] += Count
            Number = Number / 10
        UNTIL Number = 0

For example, for the range 786 to 3,021, the counter will be incremented:

- By 1 from 786 to 790 (5 cycles)
- By 9 from 790 to 799 (1 cycle)
- By 1 from 799 to 800
- By 99 from 800 to 899
- By 1 from 899 to 900
- By 99 from 900 to 999
- By 1 from 999 to 1000
- By 999 from 1000 to 1999
- By 1 from 1999 to 2000
- By 999 from 2000 to 2999
- By 1 from 2999 to 3000
- By 1 from 3000 to 3010 (10 cycles)
- By 9 from 3010 to 3019 (1 cycle)
- By 1 from 3019 to 3021 (2 cycles)

Total: 28 cycles. Without the optimization: 2,235 cycles.

Note that this algorithm solves the problem without leading zeros. To use it with leading zeros, I used a hack: if the range 700 to 1,000 with leading zeros is needed, use the algorithm for 10,700 to 11,000 and then subtract 1,000 - 700 = 300 from the count of digit 1.

Benchmark and source code: I tested the original approach, the same approach using %10, and the new solution for some large ranges, with these results (in seconds):

    Original              104.78
    With %10               83.66
    With powers of ten      0.07

[Screenshot of the benchmark application]

If you would like to see the full source code or run the benchmark, use these links: complete source code (in Clarion): http://sca.mx/ftp/countdigits.txt ; compilable project and win32 exe: http://sca.mx/ftp/countdigits.zip

Accepted answer: noahlavine's solution may be correct, but I just couldn't follow the pseudocode; I think there are some details missing or not completely explained. Aaronaught's solution seems to be correct, but the code is just too complex for my taste. I accepted strainer's answer, because his line of thought guided me to develop this new solution.
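[A quick cross-check, added here and not part of the original post: the three closed-form formulas above are easy to verify against brute-force enumeration for small n with a few lines of Python.]

    def brute_force_counts(n, leading_zeros=False):
        """Count each digit over 1..10**n - 1 by direct enumeration."""
        counts = [0] * 10
        for number in range(1, 10 ** n):
            text = str(number).zfill(n) if leading_zeros else str(number)
            for ch in text:
                counts[int(ch)] += 1
        return counts

    def formula_counts(n, leading_zeros=False):
        """The closed-form counts described in the post."""
        nonzero = n * 10 ** (n - 1)                          # digits 1..9
        if leading_zeros:
            zero = n * 10 ** (n - 1) - n                     # digit 0, zero-padded to n places
        else:
            zero = n * 10 ** (n - 1) - (10 ** n - 1) // 9    # digit 0, no padding
        return [zero] + [nonzero] * 9

    for n in range(1, 6):
        for lz in (False, True):
            assert brute_force_counts(n, lz) == formula_counts(n, lz), (n, lz)
    print("formulas agree with brute force for n = 1..5")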

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
One of the biggest challenges for content management solutions is storage management, due to the huge and unstoppably growing volume of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA: keeping the WebCenter Content database up and running. One solution is to use database partitions for your content repository, but what are the implications of this? Can it fit your business requirements? Well, yes. How you manage it is up to you; you just need a good plan.

During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date. Your plan can also have complex rules, such as partitioning per document type, per a custom metadata field like department, or a hybrid of date, document type and a specific virtual folder. Extrapolating the idea, you can have your repository distributed across different servers, different disks, different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, splitting the "hot" content from the legacy and easily matching your compliance requirements.

If you think of partitioning by revision, the simple way is to consider the dID, which is the sequential unique ID for every content item created using WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was added to the database table using SecureFiles.

Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you could use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:

- Previously existing data on the JDBC Storage, to be migrated to the new partitioned JDBC Storage
- Partition by date
- Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
- Deduplication and compression for legacy data
- Oracle WebCenter Content 11g PS5 (may include some customizations that do not affect the test scenario)

For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner, TESTS_OCS. I can't forget to say that this is just a test, and you should make a proper backup of your environment first. When you use the schema owner, some privileges are needed; grant them using the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;

    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically, using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule you choose. The tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY
      DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1
      DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2
      DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3
      DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A
      DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B
      DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C
      DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

Now check whether the redefinition process can be executed:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
You need to create a new table with the partition information to act as an interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY
        VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_LEGACY
          RETENTION NONE
          DEDUPLICATE
          COMPRESS HIGH
        ),
      PARTITION FILESTORAGE_PART_DAY1
        VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY1
          RETENTION AUTO
          KEEP_DUPLICATES
          COMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY2
        VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY2
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY3
        VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY3
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        )
    );

After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname        => 'TESTS_OCS'
       ,orig_table   => 'FileStorage'
       ,int_table    => 'FileStorage_PART'
       ,col_mapping  => NULL
       ,options_flag => DBMS_REDEFINITION.CONS_USE_PK);
    END;

This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname            => 'TESTS_OCS'
       ,orig_table       => 'FileStorage'
       ,int_table        => 'FileStorage_PART'
       ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
       ,copy_triggers    => TRUE
       ,copy_constraints => TRUE
       ,copy_privileges  => TRUE
       ,ignore_errors    => TRUE
       ,num_errors       => redefinition_errors
       ,copy_statistics  => FALSE
       ,copy_mvlog       => FALSE);
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: '
          || TO_CHAR(redefinition_errors));
      END IF;
    END;

With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

Note that this will show 2 lines related to the constraints; this is expected.
Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
    END;

Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
    END;

During the execution, the FileStorage table is locked in exclusive mode until the operation finishes. After the last command, the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new partitions created automatically; no error occurs if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

To check that the FileStorage table is valid and is partitioned, use this command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

Some useful commands you can use to check the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same ID is there, then click on the web rendition of your test file and check that the file is created and you can open it; this means everything is working. The redefinition process can be repeated many times, which lets you test which layout works best, over and over again.

Now some interesting maintenance actions related to the partitions:

Making a tablespace read-only. There are no issues viewing content; WebCenter Content does not alter the revisions. When you try to delete a content item that is part of a read-only tablespace, however, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if the document is in a "read-only" repository, executes the metadata deletion and marks the document to be deleted at the next database maintenance, like a new redefinition.

Taking a tablespace offline, for archiving purposes or any other reason.
When you try to open a document stored in that tablespace, you will receive an error saying the content could not be retrieved, but the other, online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if you have a document that is "out of range", the component can show a message saying that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again.

Moving some legacy content to an offline repository (table): use the Exchange option to move the content from one partition to an empty non-partitioned table, such as FileStorage_LEGACY (a sketch of this appears at the end of this post). Note that this option removes the rows from FileStorage, and the stored content can no longer be opened. You always need to keep the indexes and constraints in mind.

A redefinition separating the original content (vault) from the renditions, and separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions and want to put the original files on lower-performance storage. The process is the same; you just need to change the script of the interim table to use composite partitioning. It would be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY
          VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY
          VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY
          VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3
          VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    )
    ENABLE ROW MOVEMENT;

The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com
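[Note, not part of the original article: the Exchange option mentioned above is not spelled out in the post. As a rough sketch only, with the archive table name and options being assumptions rather than something the author shows, moving the legacy partition into a standalone archive table could look like this:

    -- Assumes FILESTORAGE_LEGACY was created beforehand with exactly the same
    -- column list, LOB storage options and local indexes as FileStorage;
    -- all names here are illustrative.
    ALTER TABLE FILESTORAGE
      EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FILESTORAGE_LEGACY
      INCLUDING INDEXES
      WITHOUT VALIDATION;

After the exchange, the rows live in FILESTORAGE_LEGACY and the partition is empty, which matches the warning above that exchanged content is no longer visible to WebCenter Content.]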

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
This is probably the longest blog post I've written in a LONG time. And if you're used to coming here for the Oracle stuff, this post is not about that. It's about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down; How I Moved to WP Engine; How WP Engine 'Hooked' Me; Why WP Engine?

I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with Bluehost. Bluehost makes setting up a WordPress site very, very easy. And they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year's worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year's service. And when that year expired I paid another $190.80 for an additional two years' service in advance. Up to that point, I had been getting my money's worth. And then, just a few weeks ago...

My Site Slowed to a Crawl

[Traffic chart – that spike was from an April Fool's Day post, I think]

Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day, as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their 'Chat now' technology – much nicer than message-boards-only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh! To be honest, I wasn't aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites load in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren't you blogging on Blogger? Ugh.

I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify's Grido (thanks to @SQLRockstar for the heads-up); I installed a couple of WP caching plugins; I read every WP optimization blog post I could get my greedy little eyes on. However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result, I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that as more people clicked on my site, the more CPU I needed. I'm not stupid, so I eventually figured out that maybe fewer plugins would be better, and I was able to go down to just 20. But still, the site was running like a dog.

[Chart – CPU throttling makes MySQL wait to run a query]

Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA 'throttling.' This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn't. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this, as their page requests would take a minute or more to be answered.

Bluehost unfortunately doesn't offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved thatJeffSmith.com. The first 2 months are free, and after that it's about $29/month to run my site on their system. My math tells me that's a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog, even though I talk about work stuff here. I don't get paid for blogging, I don't sell ads, and I don't expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic, as Google penalizes you in search results if your site is too slow, and of course some folks won't even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won't have a good reason not to write.

How I Moved to WP Engine

I signed up for the service and registered my domain. I then took a full export of my 'old' site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and finally used the WordPress 'Export' feature. I then used the WordPress 'Import' on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP'd the 'wp-content' directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I'll detail the challenges and issues of fixing the content next – but then it was time to 'flip the switch.' I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes, the DNS servers around the world were updated and it was time to see the new site!

But It Was 'Broken'

I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I had needed to set the CNAME (alias) 'www' record to point to a different URL than the 'www.thatjeffsmith.com' entry I had set. Once corrected, the site was up and running in less than a minute.

Then It Was Only Mostly Broken

Many of my plugins weren't working. Apparently just FTP'ing the wp-content directory up wasn't the proper way to re-install the plugins. I suspect the file permissions or file ownership weren't proper. Some plugins were working, many had their settings wiped to the defaults, and a few just didn't work at all. I had to delete the directory of each broken plugin manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first 'lesson' – don't switch the DNS records until you've completely tested your new site. I wasn't able to navigate the old WP console to review my plugin settings. Thankfully I was able to use the Wayback Machine to reverse-engineer some things, and of course most plugins aren't that complicated to set up to begin with. An example of one that I had to redo from scratch is the 'Twitter @Anywhere Plus' plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story.

How WP Engine 'Hooked' Me

I actually signed up with another provider first. They ranked highly in Google searches and a few tweeps recommended them to me. But hours after signing up I still didn't have a server ready, and I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email, as their website 'cancel' button was non-existent. Within minutes of activating my WP Engine account, I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding.

I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evidenced by their 60-day money-back guarantee. And if I understand it correctly, they don't even take your money until after that 60-day period is over. After a day, I was welcomed by the WP Engine social media team and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It's not just an account that's set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. 'Partner' – that's a lot nicer word than just 'service provider,' isn't it? Oh, and they offered me a t-shirt. Don't ever doubt the power of a 'free' t-shirt!

[Screenshot of the welcome e-mail – how awesome is this, from a customer perspective?]

I wasn't really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty.

But Jeff, You Skipped a Piece Here – Why WP Engine?

I found them on one of those 'Top 10' list posts and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that's it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like 'INSANELY FAST. INFINITELY SCALABLE.' and 'LIGHTNING SPEED.' And then they offered insurance against hackers, and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been 'blacklisted.' In order to guarantee that 'lightning' speed, they have banned the use of the CPU-suckiest plugins. One of those is the 'Related Posts' plugin. So if you are a subscriber and are reading this in your email, you'll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is in the very small single digits for my site, I decided that I'm OK with that. I'd rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I'll be sure to update this post when that time comes and let y'all know how it turns out.

    Read the article

  • Django app that can provide user friendly, multiple / mass file upload functionality to other apps

    - by hopla
Hi, I'm going to be honest: this is a question I asked on the Django-Users mailing list last week. Since I didn't get any replies there yet, I'm reposting it on Stack Overflow in the hope that it gets more attention here.

I want to create an app that makes it easy to do user-friendly, multiple/mass file upload in your own apps. By user-friendly I mean uploads like Gmail, Flickr, ..., where the user can select multiple files at once in the browse-file dialog. The files are then uploaded sequentially or in parallel, and a nice overview of the selected files is shown on the page with a progress bar next to them. A 'Cancel' upload button is also a possible option. All that niceness is usually achieved by using a Flash object. Complete solutions are out there for the client side, like SWFUpload http://swfupload.org/ , FancyUpload http://digitarald.de/project/fancyupload/ , YUI 2 Uploader http://developer.yahoo.com/yui/uploader/ and probably many more. Of course, the trick is getting those solutions integrated in your project. Especially in a framework like Django, doubly so if you want it to be reusable.

So, I have a few ideas, but I'm neither an expert on Django nor on Flash-based upload solutions. I'll share my ideas here in the hope of getting some feedback from more knowledgeable and experienced people. (Or even just some 'I want this too!' replies :) ) You will notice that I make a few assumptions; this is to keep the (initial) scope of the application under control. These assumptions are of course debatable.

All right, my ideas so far:

If you want to mass upload multiple files, you are going to have a model to contain each file in; i.e. the model will contain one FileField or one ImageField. Models with a multiple (but of course finite) number of FileFields/ImageFields are not in need of easy mass uploading imho: if you have a model with 100 FileFields you are doing something wrong :) Examples where you would want my envisioned kind of mass upload: an app that has just one model 'Brochure' with a file field, a title field (dynamically created from the filename) and a date_added field; or a photo gallery app with models 'Gallery' and 'Photo', where you pick a Gallery to add pictures to, upload the pictures, and new Photo objects are created with their foreign keys set to the chosen Gallery.

It would be nice to be able to configure or extend the app for your favorite Flash upload solution. We can pick one of the three above as a default, but implement the app so that people can easily add additional implementations (kinda like Django can use multiple databases). Let it be agnostic to any particular client-side solution. If we need to pick one to start with, maybe pick the one with the smallest footprint (the smallest download of client-side stuff)?

The Flash-based solutions asynchronously (and either sequentially or in parallel) POST the files to a URL. I suggest that URL be local to our generic app (so it's the same for every app where you use our app). That URL will go to a view provided by our generic app. The view will do the following: create a new model instance, add the file, OPTIONALLY DO EXTRA STUFF, and save the instance. DO EXTRA STUFF is code that the app that uses our app wants to run. It doesn't have to provide any extra code; if the model has just a FileField/ImageField, the standard view code will do the job. But most apps will want to do extra stuff, I think, like filling in the other fields: title, date_added, foreign keys, many-to-many, ...

I have not yet thought about a mechanism for DO EXTRA STUFF. Just wrapping the generic app view came to mind, but that is not developer-friendly, since you would have to write your own URL pattern and your own view. Then you have to tell the Flash solutions to use a new URL, etc. I think something like signals could be used here? (A sketch of this follows below.)

Forms/Admin: I'm still very sketchy on how all this could best be integrated into the Admin or generic Django forms/widgets/... (and this is where my lack of Django experience shows). In the case of the Gallery/Photo app: you could provide a mass Photo upload widget on the Gallery detail form. But what if the Gallery instance is not saved yet? The file upload view won't be able to set the foreign keys on the Photo instances. I see that the auth app, when you create a user, first asks for username and password, and only then presents you with a bigger form to fill in the email address, pick roles, etc. We could do something like that. In the case of an app with just one model: how do you provide a form in the Django admin to do your mass upload? You can't do it with the detail form of your model; that's just for one model instance.

There are probably dozens more questions that need to be answered before I can even start on this app. So please tell me what you think! Give me input! What do you like? What not? What would you do differently? Is this idea solid? Where is it not? Thank you!
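[Note, not part of the original question: to make the signals idea concrete, here is a minimal sketch of what the generic upload view plus "DO EXTRA STUFF" hook could look like. Every name in it (the signal, the view, the Filedata field, the models) is hypothetical; it is one plausible shape for the idea, written against Django 1.x-era APIs (providing_args was removed in later Django versions), not an existing app.]

    # massupload/views.py -- illustrative sketch only; all names are made up.
    import django.dispatch
    from django.http import HttpResponse

    # Host apps connect to this signal to run their "EXTRA STUFF"
    # (fill in titles, foreign keys, many-to-many fields, ...).
    file_uploaded = django.dispatch.Signal(providing_args=["instance", "request"])

    def handle_upload(request, model_class, file_field="file"):
        """Generic POST target for the Flash uploader; handles one file per request."""
        uploaded_file = request.FILES["Filedata"]  # "Filedata" is SWFUpload's default field name
        instance = model_class()
        setattr(instance, file_field, uploaded_file)
        # Let the host app do its extra stuff before the instance is saved.
        file_uploaded.send(sender=model_class, instance=instance, request=request)
        instance.save()
        return HttpResponse("ok")

    # gallery/handlers.py -- how a host app would hook in (also hypothetical):
    #
    # from massupload.views import file_uploaded
    #
    # def set_gallery_fields(sender, instance, request, **kwargs):
    #     instance.title = instance.file.name
    #     instance.gallery_id = request.POST["gallery_id"]
    #
    # file_uploaded.connect(set_gallery_fields, sender=Photo)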

    Read the article

  • IOC Container Handling State Params in Non-Default Constructor

    - by Mystagogue
For the purpose of this discussion, there are two kinds of parameters an object constructor might take: state dependencies or service dependencies. Supplying a service dependency with an IoC container is easy: DI takes over. But in contrast, state dependencies are usually only known to the client, that is, the object requestor. It turns out that having a client supply the state params through an IoC container is quite painful. I will show several different ways to do this, all of which have big problems, and ask the community if there is another option I'm missing. Let's begin. Before I added an IoC container to my project code, I started with a class like this:

    class Foobar
    {
        // parameters are state dependencies, not service dependencies
        public Foobar(string alpha, int omega) {...}
        // ...other stuff
    }

I decide to add a Logger service dependency to the Foobar class, which perhaps I'll provide through DI:

    class Foobar
    {
        public Foobar(string alpha, int omega, ILogger log) {...}
        // ...other stuff
    }

But then I'm also told I need to make class Foobar itself "swappable." That is, I'm required to service-locate a Foobar instance. I add a new interface into the mix:

    class Foobar : IFoobar
    {
        public Foobar(string alpha, int omega, ILogger log) {...}
        // ...other stuff
    }

When I make the service locator call, it will DI the ILogger service dependency for me. Unfortunately, the same is not true of the state dependencies Alpha and Omega. Some containers offer a syntax to address this:

    // Unity 2.0 pseudo-ish code:
    myContainer.Resolve<IFoobar>(new ParameterOverride[] {
        { "alpha", "one" }, { "omega", 2 }
    });

I like the feature, but I don't like that it is untyped and it is not evident to the developer what parameters must be passed (via IntelliSense, etc.). So I look at another solution:

    // This is a "boiler plate" heavy approach!
    class Foobar : IFoobar
    {
        public Foobar(string alpha, int omega) {...}
        // ...stuff
    }

    class FoobarFactory : IFoobarFactory
    {
        IFoobar IFoobarFactory.Create(string alpha, int omega)
        {
            return new Foobar(alpha, omega);
        }
    }

    // fetch it...
    myContainer.Resolve<IFoobarFactory>().Create("one", 2);

The above solves the type-safety and IntelliSense problem, but it (1) forced class Foobar to fetch an ILogger through a service locator rather than DI, and (2) requires me to make a bunch of boiler-plate (XXXFactory, IXXXFactory) for all varieties of Foobar implementations I might use. Should I decide to go with a pure service-locator approach, that may not be a problem. But I still can't stand all the boiler-plate needed to make this work. So then I try this:

    // code named "concrete creator"
    class Foobar : IFoobar
    {
        public Foobar(string alpha, int omega, ILogger log) {...}

        static IFoobar Create(string alpha, int omega)
        {
            // Unity 2.0 pseudo-ish code. Assume a common service locator,
            // or a singleton holds the container...
            return Container.Resolve<IFoobar>(new ParameterOverride[] {
                { "alpha", alpha }, { "omega", omega }
            });
        }
    }

    // Get my instance:
    Foobar.Create("alpha", 2);

I actually don't mind that I'm using the concrete Foobar class to create an IFoobar. It represents a base concept that I don't expect to change in my code. I also don't mind the lack of type safety in the static Create, because it is now encapsulated. My IntelliSense is working too! Any concrete instance made this way will ignore the supplied state params if they don't apply (a Unity 2.0 behavior). Perhaps a different concrete implementation FooFoobar might have a formal arg name mismatch, but I'm still pretty happy with it.

But the big problem with this approach is that it only works effectively with Unity 2.0 (a mismatched parameter in StructureMap will throw an exception). So it is good only if I stay with Unity. The problem is, I'm beginning to like StructureMap a lot more. So now I go on to yet another option:

    class Foobar : IFoobar, IFoobarInit
    {
        public Foobar(ILogger log) {...}

        IFoobar IFoobarInit.Initialize(string alpha, int omega)
        {
            this.alpha = alpha;
            this.omega = omega;
            return this;
        }
    }

    // now create it...
    IFoobar foo = myContainer.Resolve<IFoobarInit>().Initialize("one", 2);

Now with this I've got a somewhat nice compromise between the other approaches: (1) my arguments are type-safe / IntelliSense-aware, (2) I have a choice of fetching the ILogger via DI (shown above) or a service locator, (3) there is no need to make one or more separate concrete FoobarFactory classes (contrast with the verbose "boiler-plate" example code earlier), and (4) it reasonably upholds the principle "make interfaces easy to use correctly, and hard to use incorrectly." At least it is arguably no worse than the alternatives previously discussed.

One acceptance barrier yet remains: I also want to apply "design by contract." Every sample I presented intentionally favored constructor injection (for state dependencies) because I want to preserve "invariant" support as most commonly practiced, namely that the invariant is established when the constructor completes. In the sample above, the invariant is not established when object construction completes. As long as I'm doing home-grown design by contract, I could just tell developers not to test the invariant until the Initialize(...) method is called. But more to the point, when .NET 4.0 comes out I want to use its "code contract" support for design by contract. From what I read, it will not be compatible with this last approach. Curses!

Of course it also occurs to me that my entire philosophy is off. Perhaps I'd be told that conjuring a Foobar : IFoobar via a service locator implies that it is a service, and services only have other service dependencies; they don't have state dependencies (such as the Alpha and Omega of these examples). I'm open to listening to such philosophical matters as well, but I'd also like to know what semi-authoritative reference to read that would steer me down that thought path.

So now I turn it to the community. What approach should I consider that I haven't yet? Must I really believe I've exhausted my options? (One more possibility is sketched below.)
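[Note, not part of the original question: one further option the post doesn't show is registering a typed factory delegate in the container. This keeps the constructor-established invariant, stays type-safe and IntelliSense-friendly, and needs only one lambda per implementation instead of a factory class. The registration below is written Unity 2.0-style as an assumption; other containers, including StructureMap, have equivalent ways to register a Func.]

    // Composition root: register the service and a typed factory delegate once.
    var container = new UnityContainer();
    container.RegisterType<ILogger, ConsoleLogger>();
    container.RegisterInstance<Func<string, int, IFoobar>>(
        (alpha, omega) => new Foobar(alpha, omega, container.Resolve<ILogger>()));

    // Client code: typed arguments, and the invariant is established inside
    // the Foobar constructor as usual.
    var createFoobar = container.Resolve<Func<string, int, IFoobar>>();
    IFoobar foo = createFoobar("one", 2);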

    Read the article

< Previous Page | 707 708 709 710 711 712 713 714 715 716 717 718  | Next Page >