Search Results

Search found 1504 results on 61 pages for 'nearly lunchtime'.


  • Wrong background colors in Swing ListCellRenderer

    - by Johannes Rössel
    I'm currently trying to write a custom ListCellRenderer for a JList. Unfortunately, nearly all examples simply use the DefaultListCellRenderer as a JLabel and are done with it; I needed a JPanel, however (since I need to display a little more info than just an icon and one line of text). Now I have a problem with the background colors, specifically with the Nimbus PLAF. Seemingly the background color I get from list.getBackground() is white, but it paints as a shade of gray (or blueish gray). Outputting the color I get yields the following:

        Background color: DerivedColor(color=255,255,255 parent=nimbusLightBackground offsets=0.0,0.0,0.0,0 pColor=255,255,255

    However, as can be seen, this isn't what gets painted. It obviously works fine for the selected item. Currently I even have every component I put into the JPanel the cell renderer returns set to opaque and with the correct foreground and background colors, to no avail. Any ideas what I'm doing wrong here?

    ETA: Example code which hopefully runs:

        public class ParameterListCellRenderer implements ListCellRenderer {
            @Override
            public Component getListCellRendererComponent(JList list, Object value,
                    int index, boolean isSelected, boolean cellHasFocus) {
                // some values we need
                Border border = null;
                Color foreground, background;
                if (isSelected) {
                    background = list.getSelectionBackground();
                    foreground = list.getSelectionForeground();
                } else {
                    background = list.getBackground();
                    foreground = list.getForeground();
                }
                if (cellHasFocus) {
                    if (isSelected) {
                        border = UIManager.getBorder("List.focusSelectedCellHighlightBorder");
                    }
                    if (border == null) {
                        border = UIManager.getBorder("List.focusCellHighlightBorder");
                    }
                } else {
                    border = UIManager.getBorder("List.cellNoFocusBorder");
                }
                System.out.println("Background color: " + background.toString());

                JPanel outerPanel = new JPanel(new BorderLayout());
                setProperties(outerPanel, foreground, background);
                outerPanel.setBorder(border);

                JLabel nameLabel = new JLabel("Factory name here");
                setProperties(nameLabel, foreground, background);
                outerPanel.add(nameLabel, BorderLayout.PAGE_START);

                Box innerPanel = new Box(BoxLayout.PAGE_AXIS);
                setProperties(innerPanel, foreground, background);
                innerPanel.setAlignmentX(Box.LEFT_ALIGNMENT);
                innerPanel.setBorder(BorderFactory.createEmptyBorder(0, 10, 0, 0));

                JLabel label = new JLabel("param: value");
                label.setFont(label.getFont().deriveFont(
                        AffineTransform.getScaleInstance(0.95, 0.95)));
                setProperties(label, foreground, background);
                innerPanel.add(label);

                outerPanel.add(innerPanel, BorderLayout.CENTER);
                return outerPanel;
            }

            private void setProperties(JComponent component, Color foreground, Color background) {
                component.setOpaque(true);
                component.setForeground(foreground);
                component.setBackground(background);
            }
        }

    The weird thing is, if I do

        if (isSelected) {
            background = new Color(list.getSelectionBackground().getRGB());
            foreground = new Color(list.getSelectionForeground().getRGB());
        } else {
            background = new Color(list.getBackground().getRGB());
            foreground = new Color(list.getForeground().getRGB());
        }

    it magically works. So maybe the DerivedColor based on nimbusLightBackground that I'm getting there is the problem?

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, it didn't have a template equivalent.  With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET.  However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, though, that helps you navigate these waters to make your generics more powerful.

    The Problem – C# Assumes Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() off of template type T.  Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors.  C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    So why does C# give us a compiler error even when we don't yet know what type T is?  This is because C# took a different path in how it implemented generics.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object).  That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specifics.  So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type With the Where Clause

    So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.  You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining Generic Type to Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or inherit from a certain base class.  For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    Now that we've constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal.  If you constrain your type, you will also get a compiler error if you attempt to use a type that doesn't meet the constraint.  This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.

    Constraining Generic Type to Only Reference Types

    Sometimes you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, for which null is meaningless.  We can fix this by specifying the class constraint in the where clause.  By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there's no direct constraint for those).

    Constraining Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.).  Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T is constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.

    Constraining Generic Type to Require Default Constructor

    You can also constrain a type to require the existence of a default constructor.  Because by default C# doesn't know what constructors a generic type placeholder does or does not have available, it typically can't allow you to call one.  That said, if you give it the new() constraint, the type used to realize the generic type must have a default (no-argument) constructor.  Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.  Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        // Given a set of Action<TFrom,TTo> mappings, will map TFrom to TTo
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with empty translation and reverse translation sets.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                    {
                        if (conditional(from))
                        {
                            translation(from, to);
                        }
                    });
            }

            // Translates an object forward from TFrom object to TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can't specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond just the base "object level" behavior.  There are a few things you cannot (currently) specify with constraints, though:

    - You cannot specify that the generic type must be an enum.
    - You cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can't say that the generic must have a Start() method.
    - You cannot specify that the generic type allows arithmetic operations.
    - You cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a generic type definition with different, opposing constraints.  For example, you can't define an Adapter<T> where T : struct and an Adapter<T> where T : class.  Hopefully in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful.

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
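
    As a footnote to the article above (my own sketch, not from the original post): these constraints can also be combined on a single type parameter, with class first, new() last, and any interfaces in between. The CreateIfGreater helper below is a hypothetical example purely for illustration:

        using System;

        public static class ConstrainedFactory
        {
            // T must be a reference type, be comparable to itself, and have a
            // default constructor: three constraints combined in one clause.
            public static T CreateIfGreater<T>(T threshold)
                where T : class, IComparable<T>, new()
            {
                var candidate = new T();                    // legal because of new()
                return candidate.CompareTo(threshold) > 0   // legal because of IComparable<T>
                    ? candidate
                    : null;                                 // legal because of class
            }
        }

    For example, ConstrainedFactory.CreateIfGreater(new Version(0, 9)) compiles because System.Version is a reference type, implements IComparable<Version>, and has a parameterless constructor.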

    Read the article

  • CodePlex Daily Summary for Wednesday, May 07, 2014

    CodePlex Daily Summary for Wednesday, May 07, 2014

    Popular Releases

    - Seal Report: Seal Report 1.4: New features: Report Designer: new option to convert a Report Source into a Repository Source; Report Designer: new contextual helper menus to select, remove, copy, prompt elements in a model; Web Server: new option to expand sub-folders displayed in the tree view; Web Server: the Web Folder Description File can be a .cshtml file to display an MVC View; Views: additional CSS parameters for some DIVs; NVD3 Chart: some default configuration values have been changed. Issues addressed: 16 ...
    - Magick.NET: Magick.NET 6.8.9.002: Magick.NET linked with ImageMagick 6.8.9.0.
    - VidCoder: 1.5.22 Beta: Added ability to burn SRT subtitles. Updated to HandBrake SVN 6169. Added checks to prevent VidCoder from running with a database version newer than it expects. Tooltips in the Advanced Video panel now trigger on the field labels as well as the fields themselves. Fixed updating preset/profile/tune/level settings on changing video encoder. This should resolve some problems with QSV encoding. Fixed tunes and profiles getting set to blank when switching between x264 and x265. Fixed co...
    - NuGet: NuGet 2.8.2: We will be releasing a 2.8.2 version of our own NuGet packages and the NuGet.exe command-line tool. The 2.8.2 release will not include updated VS or WebMatrix extensions. NuGet.Server.Extensions.dll needs to be used alongside NuGet-Signed.exe to provide the NuGet.exe mirror functionality.
    - DNN CMS Platform: 07.03.00 BETA (Not For Production Use): The DNN 7.3 release is focused on performance, and we have made a variety of changes to improve the run-time characteristics of the platform. End users will notice faster page response times and administrators will appreciate a more responsive user experience, especially on larger scale web sites. This is a BETA release and is NOT recommended for production use. There is no upgrade path offered from this release to the final DNN 7.3 release. Known issues: the Telerik RAD Controls for ASP.NET AJA...
    - babelua: 1.5.3: V1.5.3 - 2014.5.6. Stability improvements: support relative paths when debugging Lua files; improve the variable view function in the debugger; fix a bug where breakpoints in a file loaded at runtime would not take effect; some other bug fixes. New features: improve the tool window color scheme to look better with different VS themes; comment/uncomment code blocks easily; some other improvements.
    - CTI Text Encryption: CTI Text Encryption 5.0: Improved encryption algorithm. Simplified all non-encryption-related mechanisms (simplified user interface).
    - SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 2.0.2: SmartStore.NET 2.0.2 is primarily a maintenance release for version 2.0.0, which was released on April 04 2014. It contains several improvements & important fixes. Bugfixes: IMPORTANT FIX: memory leak leads to OutOfMemoryException in the application after a while. Installation fix: some varchar(MAX) columns get created as varchar(4000); added a migration to fix the column specs. Installation fix: setup fails with exception "Value cannot be null. Parameter name: stream". Bugfix for stock iss...
    - Channel9's Absolute Beginner Series: Windows Phone 8.1: Entire source code for the Windows Phone 8.1 Absolute Beginner Series.
    - BIDS Helper: BIDS Helper 1.6.6: This BIDS Helper beta release brings support for SQL Server 2014 and SSDTBI for Visual Studio 2013. (Note that SSDTBI for Visual Studio 2013 is currently unavailable to download from Microsoft. We are releasing BIDS Helper support to help those who downloaded it before it became unavailable, and we will recheck that BIDS Helper 2014 is compatible after SSDTBI becomes available to download again.) BIDS Helper 2014 beta limitations: SQL Server 2014 support for Biml is still in progress, so this bet...
    - Windows Phone IsoStoreSpy (a cool WP8.1 + WP8 Isolated Storage Explorer): IsoStoreSpy WP8.1 3.0.0.0 (Win8 only): 3.0.0.0: WP8.1 & WP8 devices allowed; Local, Roaming or Temp directory selector for WindowsRuntime apps; version number in the title :)
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.24.0: The ShortcutMapping panel now allows direct modification of the shortcuts. The custom updater dropped support for MSI installation but still allows downloading of the MSI. http://www.csscript.net/npp/codeplex/non_msi_update.png
    - ProDinner - ASP.NET MVC Sample (EF5, N-Tier, jQuery): 8.2: upgrade to MVC 5; upgrade to ASP.NET MVC Awesome 4.0, EF 6.1, latest jcrop, Windsor Castle etc.; ability to show/hide the dog; using TinyMCE for the feedback page; mobile-friendly UI
    - ASP.net MVC Awesome - jQuery Ajax Helpers: 4.0: version 4.0: added InitPopup, InitPopupForm helpers, open initialized popups using awe.open(name, params); added Tag or Tags for all helpers, usable in mod code; added popup aweclose, aweresize events; popups have an api accessible via .data('api'), with methods open, close and destroy; all popups have the .awe-popup class; popups can be grouped using .Group(string), only 1 popup in the same group can exist at the same time; added awebeginload event for...
    - SEToolbox: SEToolbox 01.028.008 Release 2: Fixes for drag-drop copying. Added ship joiner, to rejoin a broken ship! Installation of this version will replace the older version.
    - ScreenToGif: Release 1.0: What's new: small UI tweaks everywhere; you can add Text, Subtitles and Title Frames; processing the gif now takes less time; languages: Spanish, Italian and Tamil added; single .exe, multi-language; takes less time to apply the blur and pixelated effects; Restart button; language picker (if the user wants to select a different language than the system's); "Encoding Finished" page with a link to open the file; GreenScreen unchanged pixels to save kilobytes; "Check for Updates" t...
    - Touchmote: Touchmote 1.0 beta 11: Changes: support for multiple monitor setups; additional settings for analog sticks; more reliable pairing; bug fixes and improvements
    - PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.2: Added Get-IniValue / Set-IniValue functions; replaces Get-IniContent / Set-IniContent, which were a bit unwieldy; the new functions are much easier to use. Added -Value parameter to Get-RegistryKey so that a specific registry value can be retrieved instead of the entire key as an object. Fixed Test-Battery to work with machines that can have multiple batteries. Fixed issue where BlockExecution would fail if a custom temp path was specified in the config file. Updated examples with latest ...
    - Microsoft Script Analyzer: Microsoft Script Analyzer 1.1: The Script Analyzer scans your current PowerShell script code against some PowerShell best practice rules, and provides suggestions to improve the script quality and readability. By double-clicking a checking result, the relevant script code is highlighted in the code editor. You can configure the Script Analyzer rules in the Settings window of Script Analyzer. The current release includes 5 PowerShell best practice rules. Invoke-...
    - Microsoft Script Browser: Script Browser 1.1: Script Browser for Windows PowerShell ISE was designed and developed in response to many IT Pros' and MVPs' feedback during the MVP Global Summit. It puts nearly 10K script examples at IT Pros' fingertips when they write scripts to automate their IT tasks. Users can search, learn, download and manage scripts from within their scripting environment - PowerShell ISE - with just a few button clicks. It saves the time of switching back and forth between webpages and the scripting environment, and also...

    New Projects

    - AOP.Net: this project is used to implement AOP in .NET
    - ARTYKLES: Project articles
    - ASP.NET Identity 2.0 Azure Table Storage: Provides a high-performance cloud solution for ASP.NET Identity 2.0 using Azure Table storage, replacing the Entity Framework / MSSQL provider.
    - Associativy Taxonomies Adapter: Administration module for the Associativy (http://associativy.com) Orchard graph platform.
    - BancoSim: BancoSim start
    - ChaosKit: Chaos-theory-based time series analysis library, calculating Lyapunov and Hurst exponents and making iterated predictions.
    - ElectrodomesticosAbr2014: Home appliances project
    - GPM: general privilege system
    - LNTest: LNTest
    - Local Electrodomésticos 001: A home-appliance store that intends to go online.
    - MVC - Filtered TextBox: Contains MVC Html helper methods for rendering a filtered text box that allows the user to enter only a given set of characters.
    - MvcTrialer: A simple class for .NET 4 MVC to redirect pages if users have exceeded a trial period date.
    - NotificationsExtensions.Portable: Portable Class Library version of the NotificationsExtensions NuGet packages. Used to create WinRT/Windows Phone notification XML.
    - OBD-II: Cross-platform library for communicating with OBD-II adapters.
    - Powershell Analyze Robocopy Logs and mail its results: Analyzes a path with Robocopy log files. It creates a summary file with information about all log files and mails its results.
    - Proyecto Electrodomesticos: home appliances project
    - Prueba1: asd
    - PVT Library: PVT library for calculating the thermodynamic properties of fluids
    - QuanLyPhongMachThuY: Project for a .NET course
    - RAMED: A project that adds functionality to RAMED-type project management, such as permissions and capabilities, to make managing this type of project easier.
    - Simple Contacts: Simple contacts directory application built with ASP.NET MVC and Web API that performs basic CRUD (create, read, update, delete) operations.
    - Stream oriented base64 encoder and decoder: Very simple implementation of a stream-oriented base64 encoder and decoder, which uses standard .NET Stream decorators.
    - Super Casa de Electronica: don't know what to write
    - Tiny Deduplicator: A file deduplicator which can scan for duplicate files and allows the user to control which duplicates to keep or recycle.
    - webdemo: Demonstration
    - Windows Embedded Compact Edition 2013 File Transfer: A console app for transferring files over TCP/IP between Windows Embedded Compact devices, and between them and the desktop. Compact 2013 and desktop versions.
    - Windows Tile Updater: A simple example of using a Microsoft Universal App to update a tile on the start screen. Created to accompany a series of blog posts.
    - WMI.NET: WMI.NET demo project for Win32 classes
    - Wumper Thumpers: The ultimate sweg

    Read the article

  • [NSCustomView isOpaque]: message sent to deallocated instance 0x123456

    - by JxXx
    Hi all. I've been getting that message in the debugger console since I added the following arguments for debugging my application with Xcode:

        NSZombieEnabled: YES
        NSZombieLevel: 16

    I was looking for zombie objects; before doing so, the application failed before I could tell where, what and why it was happening. Now I'm pretty sure that 'something' outside my code is trying to access an object that was previously released, and I can't tell where or why it happens, nor where it was released.

    My application is based on this proof of concept (very interesting and colorful) for the QuartzCore framework: http://www.cimgf.com/2008/03/03/core-animation-tutorial-wizard-dialog-with-transitions/

    Based on it, I added a few more NSViews to my project, with a title and an index for each one; I also added some buttons, text and images depending on which 'dialog' (ACLinkedView object) it was. The transition from one ACLinkedView object to another goes through a validation that depends on the view where you are. As you see, I used this proof of concept as the foundation of my application, and it grew and grew into an application that makes use of configuration files and web services (using gSOAP and C).

    I hope you can give me some clues as to where my error is. I've spent the whole week debugging unsuccessfully; as I said before, I think that message comes from a point outside my code. I'd say the problem is related to bad memory allocation or to automatisms (nearly completely unknown to me) during loading of the nib components. I will try to explain all this with parts of my code. This is my ACLinkedView definition:

        #import <Cocoa/Cocoa.h>

        @interface ACLinkedView : NSView {
            // The Window (to close it if needed)
            IBOutlet NSWindow *mainWindow;
            // Linked Views
            IBOutlet ACLinkedView *previousView;
            IBOutlet ACLinkedView *nextView;
            // Buttons
            IBOutlet NSButton *previousButton;
            IBOutlet NSButton *nextButton;
            IBOutlet NSButton *helpButton; //It has to be a Button!!
            IBOutlet NSImageView *bannerImg;

            NSString *sName;
            int iPosition;
        }

        - (void) SetName: (NSString*) Name;
        - (void) SetPosition: (int) Position;
        - (NSString*) GetName;
        - (int) GetPosition;
        - (void) windowWillClose:(NSNotification*)aNotification;

        @property (retain) NSWindow *mainWindow;
        @property (retain) ACLinkedView *previousView, *nextView;
        @property (retain) NSButton *previousButton, *nextButton, *helpButton;
        @property (retain) NSImageView *bannerImg;
        @property (retain) NSString *sName;
        @end

    As you can see, the initialization of each ACLinkedView object depends on its position, which is set up in Interface Builder by linking actions, buttons and custom views. Have I explained enough? Do you think I should put more of my code here, e.g. the AppDelegate definition or its awakeFromNib method? Can you help me in any way? Thanks in advance. Juan

    Read the article

  • Did I find a bug in PHP's `crypt()`?

    - by Nathan Long
    I think I may have found a bug in PHP's crypt() function under Windows. However: I recognize that it's probably my fault. PHP is used by millions and worked on by thousands; my code is used by tens and worked on by me. (This argument is best explained on Coding Horror.) So I'm asking for help: show me my fault. I've been trying to find it for a few days now, with no luck.

    The setup

    I'm using a Windows server installation with Apache 2.2.14 (Win32) and PHP 5.3.2. My development box runs Windows XP Professional; the 'production' server (this is an intranet setup) runs Windows Storage Server 2003. The problem happens on both. I don't see anything in php.ini related to crypt(), but will happily answer questions about my config.

    The problem

    Several scripts in my PHP app occasionally hang: the page sits there on 'waiting for localhost' and never finishes. Each of these scripts uses crypt to hash a user's password before storing it in the database, or, in the case of the login page, to hash the entered password before comparing it to the version stored in the database. Since the login page is the simplest, I focused on it for testing. I repeatedly logged in, and found that it would hang maybe 4 out of 10 times. As an experiment, I changed the login page to use the plain text password and changed my password in the database to its plain text version. The page stopped hanging. I saw that PHP's latest version lists this bugfix: "Fixed bug #51059 (crypt crashes when invalid salt are [sic] given)." So I created a very simple test script, as follows, using the same salt given in an official example:

        $foo = crypt('rasmuslerdorf','r1');
        echo $foo;

    This page, too, will hang, if I reload it like crazy. I only see it hanging in Chrome, but regardless of browser, the effect on Apache is the same.

    Effect on Apache

    When these pages hang, Apache's server-status page (which I explained here, regarding a different problem) increments the number of requests being processed and decrements the number of idle workers. The requests being processed almost all have a status of 'Sending Reply,' though sometimes for a moment they will show either 'Reading request' or 'keepalive (read).' Eventually, Apache may crash. When it does, the Windows crash report looks like this:

        szAppName: httpd.exe
        szAppVer: 2.2.14.0
        szModName: php5ts.dll
        szModVer: 5.3.1.0
        // OK, this report was before I upgraded to PHP 5.3.2, but that didn't fix it
        offset: 00a2615

    Is it my fault?

    I'm tempted to file a bug report to PHP on this. The argument against it is, as stated above, that bugs are nearly always my fault. However, my argument in favor of 'it's PHP's fault' is:

    - I'm using Windows, whereas most servers use Linux (I don't get to choose this), so the chances are greater that I've found an edge case
    - There was recently a bug with crypt(), so maybe it still has issues
    - I have made the simplest test case I can, and I still have the problem

    Can anyone duplicate this? Can you suggest where I've gone wrong? Should I file the bug after all? Thanks in advance for any help you may give.

    Read the article

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this:

    [Method 1]

        ;WITH stuff AS (
            SELECT
                CASE
                    WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                    WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                    WHEN @SortBy = ...
                    ELSE ROW_NUMBER() OVER (ORDER BY whatever)
                END AS Row,
                .,
                .,
                .,
            FROM Table1
            INNER JOIN Table2 ...
            LEFT JOIN Table3 ...
            WHERE ... (lots of things to check)
        )
        SELECT *
        FROM stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal.

    [Method 2]

    An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.

    [Method 3]

    The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable:

        DECLARE @stuff TABLE (Row INT, ...)

        INSERT INTO @stuff
        SELECT
            CASE
                WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                WHEN @SortBy = ...
                ELSE ROW_NUMBER() OVER (ORDER BY whatever)
            END AS Row,
            .,
            .,
            .,
        FROM Table1
        INNER JOIN Table2 ...
        LEFT JOIN Table3 ...
        WHERE ... (lots of things to check)

        SELECT *
        FROM @stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    (Or a similar method using an IDENTITY column on the table variable.) Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.

    I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seemed to come out on top (almost twice as quick as the other two). Then I tested with the full complexity I actually need, with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other two methods, which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3?

    UPDATE - 15 March 2012

    I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but the comparison should still be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also, this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
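
    For completeness, here is a minimal C# sketch of how a procedure written along these lines might be called from ADO.NET; the procedure name dbo.GetCustomersPage, the connection string and the Name column are hypothetical, and the @totalRows output parameter follows the pattern discussed above:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class PagedQueryExample
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                using (var cmd = new SqlCommand("dbo.GetCustomersPage", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@startRowIndex", 0);
                    cmd.Parameters.AddWithValue("@maximumRows", 25);
                    cmd.Parameters.AddWithValue("@sortBy", "Name DESC");

                    // The total row count comes back through an output parameter.
                    var total = cmd.Parameters.Add("@totalRows", SqlDbType.Int);
                    total.Direction = ParameterDirection.Output;

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader["Name"]);
                        }
                    }

                    // Output parameters are only populated once the reader is closed.
                    Console.WriteLine("Total rows: {0}", total.Value);
                }
            }
        }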

    Read the article

  • CodePlex Daily Summary for Saturday, July 27, 2013

    CodePlex Daily Summary for Saturday, July 27, 2013

    Popular Releases

    - SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.10: Added support for RAR decryption (thanks to https://github.com/hrasyid); embedded some BouncyCastle crypto classes to allow RAR decryption and WinZip AES decryption in Portable and Windows Store DLLs; built in Release (I think)
    - Memory Teaser Game: Full Release 1.1.0: Fixed memory leak issue; restart game button issue; added splash screen; changed release icon. This is version 1.1.0.0
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: changes: FIXED Login
    - OfflineBrowser: Preview Release: This is a preview release so that others can help me find bugs. This should be pretty stable, but any bugs found should be reported here as an Issue.
    - Home Access Plus+: v9.4.0727: Released to allow you to disable secure LDAP queries
    - Open Source Job board: Version X3: Full version of job board, didn't have monies to fund it so it's free.
    - DSeX DragonSpeak eXtended Editor: Version 1.0.116.0726: Cleaned up Wizard interface; added functionality for RTF undo/redo; IE inserting text from Wizard output to the tabbed editor; added sanity checks to Search/Replace dialog to prevent crashes; fixed template and paste undo/redo; fixed undo/redo blank spots; added New_FileTag Const = "(New FIle)"; added filename to modified FileClose queries (Thanks Lothus Marque)
    - Math.NET Numerics: Math.NET Numerics v2.6.0: What's New in Math.NET Numerics 2.6 - Announcement, Explanations and Sample Code. New: linear curve fitting: linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions; multi-dimensional fitting; also works well in F# with the F# extensions. New: root finding: Brent's method (~Candy Chiu, Alexander Täschner); bisection method (~Scott Stephens, Alexander Täschner); Broyden's method, for multi-dimensional functions (~Alexander Täschner) ...
    - Microsoft .NET Gadgeteer: .NET Gadgeteer Core 2.43.800: The .NET Gadgeteer Core installer includes the core libraries and end user project templates for Microsoft .NET Gadgeteer. This is a prerequisite for end users to build and deploy .NET Gadgeteer projects. It includes a project template wizard in the New Project dialog in Visual Studio 2012 or 2010 (or express versions) under the Gadgeteer tab - ".NET Gadgeteer Application". This template uses a graphical designer built for Visual Studio which allows end users to visually configure .NET Gadget...
    - FogBugzPd - Project Dashboard For FogBugz: 1.0: First public release of FogBugzPd. Zip file includes the web application. Requires: IIS 7+; Sql Server 2008/2012 or Sql Server Express 2012; .NET 4.5
    - Open Url Rewriter for DotNetNuke: Open Url Rewriter Core 0.4.3 (Beta): bug fix for removing home page; new tab with rules count for each portal with memory use estimation. OpenUrlRewriter_00.04.03_Install.zip: for DNN 6.01 to 7.06; OpenUrlRewriter71_00.04.03_Install.zip: for DNN 7.1
    - KerbalAlarmClock: v2.5.0.0 Release: Version 2.5.0.0: recompiled it for 0.21; fixed some issues with hyperbolic orbits and AN/DN nodes
    - AJAX Control Toolkit: July 2013 Release: AJAX Control Toolkit Release Notes - July 2013 Release, version 7.0725. July 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    - MJP's DirectX 11 Samples: Specular Antialiasing Sample: Sample code to complement my presentation that's part of the Physically Based Shading in Theory and Practice course at SIGGRAPH 2013, entitled "Crafting a Next-Gen Material Pipeline for The Order: 1886". Demonstrates various methods of preventing aliasing from specular BRDFs when using high-frequency normal maps. The zip file contains source code as well as a pre-compiled x64 binary.
    - Kartris E-commerce: Kartris v2.5003: This fixes an issue where search engines appear to identify as IE and so trigger the noIE page if there is not a non-responsive skin specified.
    - GoAgent GUI: GoAgent GUI 1.3.5 Alpha (20130723): [Chinese release notes lost to encoding in the original]
    - LogicCircuit: LogicCircuit 2.13.07.22: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the running circuit hierarchy. Changes in this version: you can make visual elements of the circuit visible on its symbols; this way you can build composite displays and keyboards and reuse them. Please read about displays for more details http://ww...
    - LINQ to Twitter: LINQ to Twitter v2.1.08: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - AcDown: AcDown v4.4.3: [Chinese release notes garbled by encoding in the original; recoverable mentions include Acfun, Bilibili, YouTube, AcPlay, C#/.NET Framework 2.0, Windows XP/Vista/7/8, Linux and Mono]
    - Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6. These zip files are also available as a NuGet package: https://nuget.org/profiles/dlemstra/

    New Projects

    - agree grammar engineering environment: agree is a concurrent parse/generation engine, a .NET implementation of the DELPH-IN joint reference formalism for natural language analysis.
    - AWF's Utility Library: A collection of awf utilities
    - C# chat client with server: Newest chat with client and server.
    - Charming components for Windows Phone: Build Charming apps for Windows Phone. Adds ready-made Search, Share and Settings functionality to Windows Phone. Share more code across platforms.
    - Darkorbit Configuration Manager: configuration manager for Darkorbit
    - Darkorbit MultiTool: multi-tool for Darkorbit (skylab bot, trade bot, techcenter bot, multiaccount)
    - diabind: Python binding of the DIA (Debug Interface Access) SDK
    - Doodle .Net Connector: This library allows easy access to the Doodle REST API.
    - DssCECB Version 2.0: aaa
    - eCommunity: e-community
    - EFDemo: This is a demo for Entity Framework
    - Employee Directory Webpart in SharePoint 2010: Employee Directory for SharePoint 2010
    - EntityContext: A lightweight wrapper around Entity Framework allowing access to some internals of the framework, as well as some functionality usually required.
    - EwsRelentless: This is a sample application which demonstrates how you might place a heavy load of EWS calls against an Exchange server in order to test performance.
    - FogBugzPd - Project Dashboard For FogBugz: This tool helps PMs, Tech Leads and Executives to see actual progress on the projects tracked in FogBugz.
    - HP Battery Health Scan Script: Silently runs the HP Battery Check utility and saves the result to an Access DB.
    - I am following: Users can follow nearly everything in SharePoint 2013. This solution provides a list of followed content wherever you need it in your SharePoint.
    - Image Viewer: Simple image viewer written for academic purposes
    - LIBRERIA PRISA: sales system
    - Liduv: Facilitates teachers' daily work, including creating marks and lesson drafts, curricula, and calculating marks for written tests etc.
    - Maze Builder Library: A builder for random maze generation
    - PayPal Express Checkout for nopCommerce: nopCommerce plugin to allow for PayPal Express Checkout. Full integration with shipping options.
    - PsTest - UnitTesting for PowerShell: PsTest is a lightweight unit-testing module for use in PowerShell. It lets PowerShell users create, discover and run unit tests in PowerShell.
    - SIGE: comprehensive education management system
    - Sql Mass Dumper: A simple project to dump all the data in your SQL Server in XML or JSON format.
    - SSIS Wait Task: An SSIS task which suspends execution for a time period or until a specific time. Additionally, a SQL statement can be defined that can also delay execution.
    - Tactical Combat - Unity3d Game: A first person shooter! The main character is Billy Hills; needs a story plot.
    - The open source customer relation and billing software: Memtem is an open source customer relation and billing application written in C#.NET.
    - Tridion Gateway Service: WCF support for the Tridion TOM COM API
    - WINJS CTK: WinJS Control Kit is a set of custom WinJS controls that are not supported by Windows 8.

    Read the article

  • Rails unknown action suddenly everywhere

    - by Joe
    The weird thing is that my app was working perfectly on Sat, and when I checked it out on Monday (after doing nothing to it) I kept getting this problem. This behaviour only happens on my production server. When I try to log in or create a new user, or do something that interacts with a form, I get an unknown action error. A simple retrieval of rows does not throw this error, however. I don't have all CRUD operations in most of my controllers because it's not necessary, but Rails always looks for the one that doesn't exist; it seems that way, anyway. If I make a mistake in the form that would normally throw a validation message to the user, it throws this error too. Does that mean it has something to do with the model as well? (I'm not too experienced with Rails and didn't know if that would be the case or not.)

    This is the general error I am getting (I have the super_exception_notifier gem installed, so that's what all the extra params are):

        Processing SessionsController#new (for OMITTED at 2010-04-12 09:11:12) [GET]
          Rendering template within layouts/application
          Rendering sessions/new
        Completed in 3ms (View: 2, DB: 0) | 200 OK [http://OMITTED.com/session/new]

        Processing SessionsController#show (for OMITTED at 2010-04-12 09:11:14) [GET]

        ActionController::UnknownAction (No action responded to show. Actions: create, destroy, error_class_status_codes, error_class_status_codes=, error_layout, error_layout=, exception_notifiable_notification_level, exception_notifiable_notification_level=, exception_notifiable_silent_exceptions, exception_notifiable_silent_exceptions=, exception_notifiable_verbose, exception_notifiable_verbose=, http_status_codes, http_status_codes=, and new):
          dragonfly (0.5.3) lib/dragonfly/middleware.rb:13:in `call'
          passenger (2.2.9) lib/phusion_passenger/rack/request_handler.rb:92:in `process_request'
          passenger (2.2.9) lib/phusion_passenger/abstract_request_handler.rb:207:in `main_loop'
          passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:400:in `start_request_handler'
          passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:351:in `handle_spawn_application'
          passenger (2.2.9) lib/phusion_passenger/utils.rb:184:in `safe_fork'
          passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:349:in `handle_spawn_application'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `__send__'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `main_loop'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:163:in `start'
          passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:209:in `start'
          passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:262:in `spawn_rails_application'
          passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:126:in `lookup_or_add'
          passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:256:in `spawn_rails_application'
          passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:80:in `synchronize'
          passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize'
          passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:255:in `spawn_rails_application'
          passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:154:in `spawn_application'
          passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:287:in `handle_spawn_application'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `__send__'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `main_loop'
          passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously'

    This is what one of my forms looks like (nothing special):

        <% form_tag session_path do -%>
          <p><%= label_tag 'Username' %><br />
          <%= text_field_tag 'login', @login %></p>

          <p><%= label_tag 'password' %><br/>
          <%= password_field_tag 'password', nil %></p>

          <p><%= label_tag 'remember_me', 'Remember me' %>
          <%= check_box_tag 'remember_me', '1', @remember_me %></p>

          <p><%= submit_tag 'Log in' %></p>
        <% end -%>

    It looks like dragonfly is the culprit, doesn't it? Here's the section from the gem files it says is being naughty:

        module Dragonfly
          class Middleware

            def initialize(app, dragonfly_app_name)
              @app = app
              @dragonfly_app_name = dragonfly_app_name
            end

            def call(env)
              response = endpoint.call(env)
              if response[0] == 404
                @app.call(env)    # <-- line 13
              else
                response
              end
            end

    I don't know what goes on behind the scenes here, so I probably haven't been looking in the right place to fix this issue. Like I said, it only throws this in the production environment, which I guess is what the 'env' variable is referencing. Thank you for your time! I've spent nearly my whole day trying to figure this out! :(

    Read the article

  • CodePlex Daily Summary for Thursday, May 08, 2014

    CodePlex Daily Summary for Thursday, May 08, 2014

    Popular Releases

    - OneNote Tagging Kit: OneNoteTaggingKit Add-In v2.3.5241.24721: Bugfix release. First-time add-in registration problem fixed (issue 973); was introduced by changing installer technology. Page tags are retrieved from the page meta element rather than the outline, to avoid round-trip encoding issues and to avoid tags getting out of sync if someone edits the tags in the outline. Fixed issue with a popup which showed an incorrect message for a short time
    - AST - a tool for exploring windows kernel: Ast-0.1-win8.1x86: Ast-0.1-win8.1x86
    - Ghostscript.NET: Ghostscript.NET v.1.1.8: 1.1.8: fixed incompatibility problem with the 'gsapi_set_arg_encoding' function in Ghostscript releases prior to 9.10 (this function was introduced in the 9.10 release); fixed older versions' incompatibility problem with the '-dMaxBitmap=1g' switch; bugfix which in some cases turns on text antialiasing for Ghostscript 9.14; added better initialization checking. 1.1.7: implemented Ghostscript native library verification with a friendly error message that will clear out the confusion when used native Ghosts...
    - SimCityPak: SimCityPak 0.3.0.0: Contains several bugfixes, newly identified properties and some UI improvements. Main new features: UI overhaul for the main index list: icons for each different index, including icons for different property files; tooltips for all relevant fields; removed clutter. Identified hundreds of additional properties (thanks to MaxisGuillaume) - this should make modding gameplay easier
    - Seal Report: Seal Report 1.4: New features: Report Designer: new option to convert a Report Source into a Repository Source; Report Designer: new contextual helper menus to select, remove, copy, prompt elements in a model; Web Server: new option to expand sub-folders displayed in the tree view; Web Server: the Web Folder Description File can be a .cshtml file to display an MVC View; Views: additional CSS parameters for some DIVs; NVD3 Chart: some default configuration values have been changed. Issues addressed: 16 ...
    - Magick.NET: Magick.NET 6.8.9.002: Magick.NET linked with ImageMagick 6.8.9.0.
    - VidCoder: 1.5.22 Beta: Added ability to burn SRT subtitles. Updated to HandBrake SVN 6169. Added checks to prevent VidCoder from running with a database version newer than it expects. Tooltips in the Advanced Video panel now trigger on the field labels as well as the fields themselves. Fixed updating preset/profile/tune/level settings on changing video encoder. This should resolve some problems with QSV encoding. Fixed tunes and profiles getting set to blank when switching between x264 and x265. Fixed co...
    - NuGet: NuGet 2.8.2: We will be releasing a 2.8.2 version of our own NuGet packages and the NuGet.exe command-line tool. The 2.8.2 release will not include updated VS or WebMatrix extensions. NuGet.Server.Extensions.dll needs to be used alongside NuGet-Signed.exe to provide the NuGet.exe mirror functionality.
    - SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 2.0.2: SmartStore.NET 2.0.2 is primarily a maintenance release for version 2.0.0, which was released on April 04 2014. It contains several improvements & important fixes. Bugfixes: IMPORTANT FIX: memory leak leads to OutOfMemoryException in the application after a while. Installation fix: some varchar(MAX) columns get created as varchar(4000); added a migration to fix the column specs. Installation fix: setup fails with exception "Value cannot be null. Parameter name: stream". Bugfix for stock iss...
    - Channel9's Absolute Beginner Series: Windows Phone 8.1: Entire source code for the Windows Phone 8.1 Absolute Beginner Series.
    - BIDS Helper: BIDS Helper 1.6.6: This BIDS Helper beta release brings support for SQL Server 2014 and SSDTBI for Visual Studio 2013. (Note that SSDTBI for Visual Studio 2013 is currently unavailable to download from Microsoft. We are releasing BIDS Helper support to help those who downloaded it before it became unavailable, and we will recheck that BIDS Helper 2014 is compatible after SSDTBI becomes available to download again.) BIDS Helper 2014 beta limitations: SQL Server 2014 support for Biml is still in progress, so this bet...
    - Windows Phone IsoStoreSpy (a cool WP8.1 + WP8 Isolated Storage Explorer): IsoStoreSpy WP8.1 3.0.0.0 (Win8 only): 3.0.0.0: WP8.1 & WP8 devices allowed; Local, Roaming or Temp directory selector for WindowsRuntime apps; version number in the title :)
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.24.0: The ShortcutMapping panel now allows direct modification of the shortcuts. The custom updater dropped support for MSI installation but still allows downloading of the MSI. http://www.csscript.net/npp/codeplex/non_msi_update.png
    - ProDinner - ASP.NET MVC Sample (EF5, N-Tier, jQuery): 8.2: upgrade to MVC 5; upgrade to ASP.NET MVC Awesome 4.0, EF 6.1, latest jcrop, Windsor Castle etc.; ability to show/hide the dog; using TinyMCE for the feedback page; mobile-friendly UI
    - ASP.net MVC Awesome - jQuery Ajax Helpers: 4.0: version 4.0: added InitPopup, InitPopupForm helpers, open initialized popups using awe.open(name, params); added Tag or Tags for all helpers, usable in mod code; added popup aweclose, aweresize events; popups have an api accessible via .data('api'), with methods open, close and destroy; all popups have the .awe-popup class; popups can be grouped using .Group(string), only 1 popup in the same group can exist at the same time; added awebeginload event for...
    - SEToolbox: SEToolbox 01.028.008 Release 2: Fixes for drag-drop copying. Added ship joiner, to rejoin a broken ship! Installation of this version will replace the older version.
    - ScreenToGif: Release 1.0: What's new: small UI tweaks everywhere; you can add Text, Subtitles and Title Frames; processing the gif now takes less time; languages: Spanish, Italian and Tamil added; single .exe, multi-language; takes less time to apply the blur and pixelated effects; Restart button; language picker (if the user wants to select a different language than the system's); "Encoding Finished" page with a link to open the file; GreenScreen unchanged pixels to save kilobytes; "Check for Updates" t...
    - Touchmote: Touchmote 1.0 beta 11: Changes: support for multiple monitor setups; additional settings for analog sticks; more reliable pairing; bug fixes and improvements
    - Microsoft Script Analyzer: Microsoft Script Analyzer 1.1: The Script Analyzer scans your current PowerShell script code against some PowerShell best practice rules, and provides suggestions to improve the script quality and readability. By double-clicking a checking result, the relevant script code is highlighted in the code editor. You can configure the Script Analyzer rules in the Settings window of Script Analyzer. The current release includes 5 PowerShell best practice rules. Invoke-...
    - Microsoft Script Browser: Script Browser 1.1: Script Browser for Windows PowerShell ISE was designed and developed in response to many IT Pros' and MVPs' feedback during the MVP Global Summit. It puts nearly 10K script examples at IT Pros' fingertips when they write scripts to automate their IT tasks. Users can search, learn, download and manage scripts from within their scripting environment - PowerShell ISE - with just a few button clicks. It saves the time of switching back and forth between webpages and the scripting environment, and also...

    New Projects

    - Agenda do Estudante: An application to help students manage their time, available on the web, aiming for broad accessibility.
    - Code Analysis Rule Collection: Contains a set of diagnostics, code fixes and refactorings built on the Microsoft .NET Compiler Platform "Roslyn".
    - CRM User Diagnostics Tool: Diagnostics
    - DbWord: this tool uses xml.sql to generate daily and weekly reports from Word.
    - EasyWork: ADO.NET Entity Framework, Common Service Locator, Unity, Unity Interception Extension, MEF, ASP.NET MVC, jQWidgets
    - EjerciciosFer: Ultimate Summary re Loco
    - GestionEvenement: ASP.NET application
    - INNOVA: Web/mobile system for availability...
    - Internal: This is an internal project for personal usage
    - Learnserve CORE - SCORM LMS: Learnserve CORE LMS is a comprehensive .NET SCORM LMS module designed to extend DNN into a CMS/LMS that can deliver any online training.
    - M2B Gestion Entretiens: Interview management
    - Modern DevOps Tools: Modern DevOps Tools is based around building tools that use recommended practices and work inside of your continuous delivery cycle.
    - Monopoly NOHI: Monopoly for the NOHI
    - ORT MAY 2014 HOTELUCHO C# MVC3 EF5: Academic web project using MVC 3 and Entity Framework 5. Hotel reservations management. User registration and room reservations.
    - ProfSteeve: ProfSteeve is a teaching tool.
    - Project issue resolution lists: wenti
    - Registry Settings Export Library: The main purpose of this project is to enable the creation of .reg files from registry keys.
    - Scroll++: Enables scrolling UI elements without focusing them.
    - SIGCBI: System for managing the Warehouse, Administration and Sales areas of the Baños del Inca tourist complex.
    - SISPERRHH - WEB: HR management system.
    - Trabajo Practico Laboratorio V: Lab V practical assignment
    - Training_MEF_MVC: Training MEF MVC
    - Training_MEF_SimpleCalculator: Simple Calculator MEF
    - Training_MVC_ExploringMVCParts: Training Exploring MVC Parts
    - Tres Punto Cinco: Tres Punto Cinco
    - UI Exception Handler: UI Exception Handler is a simple library that provides error windows for unexpected .Net application exceptions.
    - Widjet_jQueryUI_CustomLineSelector: Custom line selector

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
So, we've looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

Invoking methods

As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code - expression trees. All the callsite binder has to return is an expression (called a 'restriction') that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn't. If the bind result is also represented in an expression tree, these can be combined easily like so:

if ([restriction is true]) {
    [invoke cached method]
}

Take my example from my previous post:

public class ClassA {
    public static void TestDynamic() {
        CallDynamic(new ClassA(), 10);
        CallDynamic(new ClassA(), "foo");
    }

    public static void CallDynamic(dynamic d, object o) {
        d.Method(o);
    }

    public void Method(int i) {}
    public void Method(string s) {}
}

When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
    thisClassA.Method(i);
}

Caching callsite results

So, now, it's up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one of the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It'll help if you've got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

1. The last method invoked on the callsite.
2. All methods that have ever been invoked on the callsite.
3. All methods that have ever been invoked on any callsite of the same type.

We'll cover each of these layers in order.

Level 1 cache: the last method called on the callsite

When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren't any entries in the caches to reuse, and invokes the binder to resolve a new method.
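Before moving on to what the callsite does with that result, here is a minimal, self-contained sketch of the restriction-plus-bind-result combination described above. This is illustrative only, not the DLR's actual code (the class and variable names are invented), but the expression tree it builds has exactly the "if ([restriction is true]) { [invoke cached method] }" shape:

using System;
using System.Linq.Expressions;

class RestrictionDemo
{
    static void Main()
    {
        // Stand-in for a callsite argument.
        ParameterExpression arg = Expression.Parameter(typeof(object), "arg");

        // Restriction: arg.GetType() == typeof(int)
        Expression restriction = Expression.Equal(
            Expression.Call(arg, typeof(object).GetMethod("GetType")),
            Expression.Constant(typeof(int), typeof(Type)));

        // Bind result: pretend the binder resolved the call to this action.
        Expression bindResult = Expression.Call(
            typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }),
            Expression.Constant("int overload invoked"));

        // Combine: if (restriction) { bindResult }
        Expression guarded = Expression.IfThen(restriction, bindResult);

        Action<object> compiled =
            Expression.Lambda<Action<object>>(guarded, arg).Compile();

        compiled(42);    // restriction passes, the cached 'method' runs
        compiled("foo"); // restriction fails, nothing happens
    }
}

Comparing Type objects with Expression.Equal works here because GetType() returns the same runtime Type instance that typeof(int) yields, so reference equality is sufficient.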
Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I'm assuming there's no return value):

if ([restriction is true]) {
    [invoke cached method]
    return;
}
if (callSite._match) {
    _match = false;
    return;
}
else {
    UpdateAndExecute(callSite, arg0, arg1, ...);
}

Woah. What's going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the 'primary' method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast.

But what if the restrictions don't match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it's slightly more complicated than that. To understand why, we need to understand the second and third level caches.

Level 2 cache: all methods that have ever been invoked on the callsite

When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache.

Why use the same delegate? Stitching together expression trees is an expensive operation. You don't want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder's result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation: an 'invoke' mode, for when the delegate is set as the value of the Target field, and a 'match' mode, used when UpdateAndExecute is searching for a method in the callsite's cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>.

The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
This allows the code within each UpdateAndExecute method to check for cache matches like so:

foreach (T cachedDelegate in Rules) {
    callSite._match = true;
    cachedDelegate(); // sets _match to false if restrictions do not pass
    if (callSite._match) {
        // passed restrictions, and the cached method was invoked
        // set this delegate as the primary target to invoke next time
        callSite.Target = cachedDelegate;
        return;
    }
    // no luck, try the next one...
}

Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

The reason for this cache should be clear - if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the 'global' cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

Putting it all together

So, how does this all fit together? (The original post illustrates the full flow with a diagram at this point, omitting some implementation and performance details.) That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code: extensive use of expression trees, compiled to IL and then into native code; multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked; and a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
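As a rough illustration of the two-mode trick, here is a heavily simplified, self-contained sketch. It is an invented toy, far simpler than the real UpdateDelegates code (MiniCallSite and intRule are made-up names), showing how one delegate can serve both as the invoke target and as a match-mode probe:

using System;
using System.Collections.Generic;

class MiniCallSite
{
    public bool Match;                                // the _match flag
    public Action<MiniCallSite, object> Target;       // level 1 cache
    public List<Action<MiniCallSite, object>> Rules = // level 2 cache
        new List<Action<MiniCallSite, object>>();

    public void UpdateAndExecute(object arg)
    {
        foreach (var rule in Rules)
        {
            Match = true;
            rule(this, arg); // match mode: the rule resets Match on failure
            if (Match)
            {
                Target = rule; // promote to primary target for next time
                return;
            }
        }
        Console.WriteLine("cache miss: would call the binder for " + arg.GetType().Name);
    }
}

class Demo
{
    static void Main()
    {
        // A cached rule whose restriction is "the argument is an int".
        Action<MiniCallSite, object> intRule = (site, arg) =>
        {
            if (arg is int) { Console.WriteLine("int overload: " + arg); return; }
            if (site.Match) { site.Match = false; return; } // match mode: report failure
            site.UpdateAndExecute(arg);                     // invoke mode: fall back
        };

        var callSite = new MiniCallSite { Target = intRule };
        callSite.Rules.Add(intRule);

        callSite.Target(callSite, 42);    // restriction passes: invoked directly
        callSite.Target(callSite, "foo"); // fails: falls back into UpdateAndExecute
    }
}

In invoke mode (Match is false) a failed restriction falls back into UpdateAndExecute; in match mode it just clears the flag, which is exactly the re-use of _match described in the post.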

    Read the article

  • Geolocation through Android's GPS Provider on a website?

    - by Corey Ogburn
I'm trying to get the geolocation of the mobile device in a regular website, not a webview of an application or anything native like that. I'm getting a location, but it's highly inaccurate: the accuracy comes back as 3230 or some other outrageous number. I'm assuming that's in meters; either way, it's not nearly accurate enough. By comparison, the same webpage on a laptop gets an accuracy of 30-40. My first thought was that it was using the Network Provider instead of the GPS Provider, telling me where I am based on tower location and reach. A little research later, I found enableHighAccuracy and set it to true in the options that I pass. After including that, I still noticed no difference. Here's the test page's HTML/JavaScript:

<html>
<head>
<script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script>
<script type="text/javascript">
function OnLoad() {
    $("#Status").text("Init");
    if (navigator.geolocation) {
        $("#Status").text("Supports Geolocation");
        navigator.geolocation.getCurrentPosition(HandleLocation, LocationError, { enableHighAccuracy: true });
        $("#Status").text("Sent position request...");
    } else {
        $("#Status").text("Doesn't support geolocation");
    }
}

function HandleLocation(position) {
    $("#Status").text("Received response:");
    $("#Position").text("(" + position.coords.latitude + ", " + position.coords.longitude + ") accuracy: " + position.coords.accuracy);
    var loc = new Microsoft.Maps.Location(position.coords.latitude, position.coords.longitude);
    GetMap(loc);
}

function LocationError(error) {
    switch (error.code) {
        case error.PERMISSION_DENIED: alert("Location not provided"); break;
        case error.POSITION_UNAVAILABLE: alert("Current location not available"); break;
        case error.TIMEOUT: alert("Timeout"); break;
        default: alert("unknown error"); break;
    }
}

function GetMap(loc) {
    var map = new Microsoft.Maps.Map(document.getElementById("mapDiv"), {credentials: "Aj59meaCR1e7rNgkfQy7j08Pd3mzfP1r04hGesGmLe2a3ZwZ3iGecwPX2SNPWq5a", center: loc, mapTypeId: Microsoft.Maps.MapTypeId.road, zoom: 15});
}
</script>
</head>
<body onload="javascript:OnLoad()">
<div id="Status"></div>
<div id="Position"></div><br/>
<div id='mapDiv' style="position:relative; width:600px; height:400px;"></div>
</body>
</html>

I'm testing this on a rooted MyTouch 3G running Cyanogen 6.1 stable (Android 2.2), and GPS is enabled. In case rooting was the problem, I have also had various friends and coworkers try the webpage on their non-rooted 2.0+ Android devices. The accuracy varied from phone to phone, but none were better than 1000; I attribute this to the different carriers. I have not yet tested with an iPhone or other location-aware cell phones (but eventually will).

    Read the article

  • MVC design in Cocoa - are all 3 always necessary? Also: naming conventions, where to put Controller

    - by Nektarios
I'm new to MVC, although I've read a lot of papers and information on the web. I know it's somewhat ambiguous and there are many different interpretations of MVC patterns... but the differences seem somewhat minimal. My main question is: are M, V, and C always going to be necessary to do this right? I haven't seen anyone address this in anything I've read. Examples (I'm working in Cocoa/Obj-C, although that shouldn't much matter)...

1) If I have a simple image on my GUI, or a text entry field that is just for a user's convenience and isn't saved or modified, these both would be V (view), but there's no M (no data and no domain processing going on), and no C to bridge them. So I just have some aspects that are "V" - seems fine.

2) I have 2 different and visible windows that each have a button on them labeled "ACTIVATE FOO" - when a user clicks the button on either, both buttons press in and change to say "DEACTIVATE FOO", and a third window appears with the label "FOO". Clicking the button again will change the button on both windows to "ACTIVATE FOO" and will remove the third "FOO" window. In this case, my V consists of the buttons on both windows, and I guess also the third window (maybe all 3 windows). I definitely have a C; my Controller object will know about these buttons and windows, will get their clicks, and will hold generic states regarding windows and buttons. However, whether I have 1 button or 10 buttons, whether my window is called "FOO" or "BAR", this doesn't matter. There's no domain knowledge or data here - just control of views. So in this example, I really have "V" and "C" but no "M" - is that ok?

3) Final example, which I am running into the most. I have a text entry field as my View. When I enter text in this, say a number representing gravity, I keep it in a Model that may do things like compute the physics of a ball while taking into account my gravity parameter. Here I have a V and an M, but I don't understand why I would need to add a C - a controller would just accept the signals from the View and pass them along to the Model, and vice versa. Since the C is just a pure passthrough, it's really "junk" code and isn't making things any more reusable, in my opinion. In most situations, when something changes I will need to change the C and M both in nearly identical ways. I realize it's probably an MVC beginner's mistake to think most situations call for only V and M... which leads me to the next subject.

4) In Cocoa / Xcode / IB, I guess my Controllers should always be instantiated objects in IB? That is, I lay out all of my "V" components in IB, and for each collection of View objects (things that are related) I should have an instantiated Controller? And then perhaps my Models should NOT be found in IB, and instead only found as classes in Xcode that tie in with Controller code found there. Is this accurate? This could explain why you'd have a Controller that is not really adding value - because you are keeping consistent...

5) What about naming these things - for my above example about FOO / BAR, maybe something that ends in Controller would be the C, like FancyWindowOpeningController, etc.? And for models - should I suffix them with Model, like GravityBallPhysicsModel, etc., or should I just name those whatever I like? I haven't seen enough code to know what's out there in the wild, and I want to get on the right track early on. Thank you in advance for setting me straight or letting me know I'm on the right track.
I feel like I'm starting to get it, and most of what I say here makes sense, but validation of my guesses would help me feel confident.

    Read the article

  • How to play audio in a Java application

    - by user577829
I'm making a Java application and I need to play audio. I'm playing mainly small sound files of my cannon firing (it's a cannon-shooting game) and the projectiles exploding, though I plan on having looping background music. I have found two different methods to accomplish this, but both don't work how I want. The first method is literally a method:

public void playSoundFile(File file) { // http://java.ittoolbox.com/groups/technical-functional/java-l/sound-in-an-application-90681
    try {
        // get an AudioInputStream
        AudioInputStream ais = AudioSystem.getAudioInputStream(file);
        // get the AudioFormat for the AudioInputStream
        AudioFormat audioformat = ais.getFormat();
        System.out.println("Format: " + audioformat.toString());
        System.out.println("Encoding: " + audioformat.getEncoding());
        System.out.println("SampleRate: " + audioformat.getSampleRate());
        System.out.println("SampleSizeInBits: " + audioformat.getSampleSizeInBits());
        System.out.println("Channels: " + audioformat.getChannels());
        System.out.println("FrameSize: " + audioformat.getFrameSize());
        System.out.println("FrameRate: " + audioformat.getFrameRate());
        System.out.println("BigEndian: " + audioformat.isBigEndian());
        // ULAW/ALAW format to PCM format conversion
        if ((audioformat.getEncoding() == AudioFormat.Encoding.ULAW)
                || (audioformat.getEncoding() == AudioFormat.Encoding.ALAW)) {
            AudioFormat newformat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                    audioformat.getSampleRate(),
                    audioformat.getSampleSizeInBits() * 2,
                    audioformat.getChannels(),
                    audioformat.getFrameSize() * 2,
                    audioformat.getFrameRate(), true);
            ais = AudioSystem.getAudioInputStream(newformat, ais);
            audioformat = newformat;
        }
        // checking for a supported output line
        DataLine.Info datalineinfo = new DataLine.Info(SourceDataLine.class, audioformat);
        if (!AudioSystem.isLineSupported(datalineinfo)) {
            //System.out.println("Line matching " + datalineinfo + " is not supported.");
        } else {
            //System.out.println("Line matching " + datalineinfo + " is supported.");
            // opening the sound output line
            SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(datalineinfo);
            sourcedataline.open(audioformat);
            sourcedataline.start();
            // copy data from the input stream to the output data line
            int framesizeinbytes = audioformat.getFrameSize();
            int bufferlengthinframes = sourcedataline.getBufferSize() / 8;
            int bufferlengthinbytes = bufferlengthinframes * framesizeinbytes;
            byte[] sounddata = new byte[bufferlengthinbytes];
            int numberofbytesread = 0;
            while ((numberofbytesread = ais.read(sounddata)) != -1) {
                sourcedataline.write(sounddata, 0, numberofbytesread);
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

The problem with this is that my entire program stops until the sound file is finished, or at least nearly finished. The second method is this:

File file = new File("Launch1.wav");
AudioClip clip;
try {
    clip = JApplet.newAudioClip(file.toURL());
    clip.play();
} catch (Exception e) {
    e.getMessage();
}

The problem I have here is that, every time, the sound file either ends early or doesn't play at all, depending on where I place the code. Is there any way to play sound without the above-mentioned problems? Am I doing something wrong? Any help is greatly appreciated.

    Read the article

  • Why does my performance slow to a crawl when I move methods into a base class?

    - by Juliet
I'm writing different implementations of immutable binary trees in C#, and I wanted my trees to inherit some common methods from a base class. However, I've run into a problem: I have lots of binary tree data structures to implement, and I wanted to move some common methods into a base binary tree class. Unfortunately, classes which derive from the base class are abysmally slow. Non-derived classes perform adequately. Here are two nearly identical implementations of an AVL tree to demonstrate:

AvlTree: http://pastebin.com/V4WWUAyT
DerivedAvlTree: http://pastebin.com/PussQDmN

The two trees have the exact same code, but I've moved the DerivedAvlTree.Insert method into the base class. Here's a test app:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using Juliet.Collections.Immutable;

namespace ConsoleApplication1
{
    class Program
    {
        const int VALUE_COUNT = 5000;

        static void Main(string[] args)
        {
            var avlTreeTimes = TimeIt(TestAvlTree);
            var derivedAvlTreeTimes = TimeIt(TestDerivedAvlTree);
            Console.WriteLine("avlTreeTimes: {0}, derivedAvlTreeTimes: {1}", avlTreeTimes, derivedAvlTreeTimes);
        }

        static double TimeIt(Func<int, int> f)
        {
            var seeds = new int[] { 314159265, 271828183, 231406926, 141421356, 161803399, 266514414, 15485867, 122949829, 198491329, 42 };
            var times = new List<double>();
            foreach (int seed in seeds)
            {
                var sw = Stopwatch.StartNew();
                f(seed);
                sw.Stop();
                times.Add(sw.Elapsed.TotalMilliseconds);
            }
            // throwing away top and bottom results
            times.Sort();
            times.RemoveAt(0);
            times.RemoveAt(times.Count - 1);
            return times.Average();
        }

        static int TestAvlTree(int seed)
        {
            var rnd = new System.Random(seed);
            var avlTree = AvlTree<double>.Create((x, y) => x.CompareTo(y));
            for (int i = 0; i < VALUE_COUNT; i++)
            {
                avlTree = avlTree.Insert(rnd.NextDouble());
            }
            return avlTree.Count;
        }

        static int TestDerivedAvlTree(int seed)
        {
            var rnd = new System.Random(seed);
            var avlTree2 = DerivedAvlTree<double>.Create((x, y) => x.CompareTo(y));
            for (int i = 0; i < VALUE_COUNT; i++)
            {
                avlTree2 = avlTree2.Insert(rnd.NextDouble());
            }
            return avlTree2.Count;
        }
    }
}

AvlTree: inserts 5000 items in 121 ms
DerivedAvlTree: inserts 5000 items in 2182 ms

My profiler indicates that the program spends an inordinate amount of time in BaseBinaryTree.Insert. Anyone who's interested can see the EQATEC log file I've created with the code above (you'll need the EQATEC profiler to make sense of the file). I really want to use a common base class for all of my binary trees, but I can't do that if performance will suffer. What causes my DerivedAvlTree to perform so badly, and what can I do to fix it?
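One variable worth isolating here - purely a hypothesis, since nothing in the question confirms the cause - is how much of the gap comes from routing every Insert through a virtual call on a base class, as opposed to the tree logic itself. A tiny standalone harness (invented names, same spirit as the test app above):

using System;
using System.Diagnostics;

class DispatchBench
{
    sealed class Direct
    {
        public int Step(int x) { return x + 1; }
    }

    abstract class Base
    {
        public abstract int Step(int x);
        public int Run(int x) { return Step(x); } // base method calling a virtual member
    }

    sealed class Indirect : Base
    {
        public override int Step(int x) { return x + 1; }
    }

    static void Main()
    {
        const int N = 100000000;
        var direct = new Direct();
        var indirect = new Indirect();
        int acc = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) acc = direct.Step(acc);
        sw.Stop();
        Console.WriteLine("direct call:      " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) acc = indirect.Run(acc);
        sw.Stop();
        Console.WriteLine("via base/virtual: " + sw.ElapsedMilliseconds + " ms (acc=" + acc + ")");
    }
}

If the two loops time similarly on your machine, dispatch overhead alone cannot explain a 121 ms vs. 2182 ms difference, and the cause must lie elsewhere (for example, in what the JIT can inline across the generic base class).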

    Read the article

  • Output a PHP multi-dimensional array to an HTML table

    - by Fireflight
I have been banging my head against the wall with this one for nearly a week now, and am no closer than I was the first day. I have a form that has 8 columns and a variable number of rows, which I need to email to the client in a nicely formatted email. The form submits the needed fields as a multidimensional array. A rough example is below:

<input name="order[0][topdiameter]" type="text" id="topdiameter0" value="1" size="5" />
<input name="order[0][bottomdiameter]" type="text" id="bottomdiameter0" value="1" size="5" />
<input name="order[0][slantheight]" type="text" id="slantheight0" value="1" size="5" />
<select name="order[0][fittertype]" id="fittertype0">
    <option value="harp">Harp</option>
    <option value="euro">Euro</option>
    <option value="bulbclip">Regular</option>
</select>
<input name="order[0][washerdrop]" type="text" id="washerdrop0" value="1" size="5" />
<select name="order[0][fabrictype]" id="fabrictype">
    <option value="linen">Linen</option>
    <option value="pleated">Pleated</option>
</select>
<select name="order[0][colours]" id="colours0">
    <option value="beige">Beige</option>
    <option value="white">White</option>
    <option value="eggshell">Eggshell</option>
    <option value="parchment">Parchment</option>
</select>
<input name="order[0][quantity]" type="text" id="quantity0" value="1" size="5" />

This form is formatted in a table, and rows can be added to it dynamically. What I've been unable to do is get a properly formatted table out of the array. This is what I'm using now (grabbed from the net):

<?php
if (isset($_POST["submit"])) {
    $arr = $_POST['order'];
    echo '<table>';
    foreach ($arr as $arrs) {
        echo '<tr>';
        foreach ($arrs as $item) {
            echo "<td>$item</td>";
        }
        echo '</tr>';
    }
    echo '</table>';
}
?>

This works perfectly for a single row of data. If I try submitting 2 or more rows from the form, then one of the columns disappears. I'd like the table to be formatted as:

| top | Bottom | Slant | Fitter | Washer | Fabric | Colours | Quantity |
------------------------------------------------------------------------
|value| value  | value | value  | value  | value  | value   | value    |

with additional rows as needed. But I can't find any examples that will generate that type of table!

    Read the article

  • Exit code 3 (not my return value, looking for source)

    - by Kathoz
Greetings. My program exits with code 3: no error messages, no exceptions, and the exit is not initiated by my code. The problem occurs when I am trying to read extremely long integer values from a text file (the text file is present and correctly opened, with successful prior reading). I am using very large amounts of memory (in fact, I think that this might be the cause, as I am nearly sure I go over the 2GB per-process memory limit). I am also using the GMP (or, rather, MPIR) library to multiply bignums. I am fairly sure that this is not a file I/O problem, as I got the same error code on a previous program version that was fully in-memory.

System:
MS Visual Studio 2008
MS Windows Vista Home Premium x86
MPIR 2.1.0 rc2
4GB RAM

Where might this error code originate from?

EDIT: this is the procedure that exits with the code:

void condenseBinSplitFile(const char *sourceFilename, int partCount) {
    // condense results file into final P and Q
    std::string tempFilename;
    std::string inputFilename(sourceFilename);
    std::string outputFilename(BIN_SPLIT_FILENAME_DATA2);
    mpz_class *P = new mpz_class(0);
    mpz_class *Q = new mpz_class(0);
    mpz_class *PP = new mpz_class(0);
    mpz_class *QQ = new mpz_class(0);
    FILE *sourceFile;
    FILE *resultFile;
    fpos_t oldPos;
    int swapCount = 0;
    while (partCount > 1) {
        std::cout << partCount << std::endl;
        sourceFile = fopen(inputFilename.c_str(), "r");
        resultFile = fopen(outputFilename.c_str(), "w");
        for (int i = 0; i < partCount/2; i++) {
            // Multiplication order:
            // Get Q, skip P
            // Get QQ, mul Q and QQ, print Q, delete Q
            // Jump back to P, get P
            // Mul P and QQ, delete QQ
            // Skip QQ, get PP
            // Mul P and PP, delete P and PP

            // Get Q, skip P
            mpz_inp_str(Q->get_mpz_t(), sourceFile, CALC_BASE);
            fgetpos(sourceFile, &oldPos);
            skipLine(sourceFile);
            skipLine(sourceFile);
            // Get QQ, mul Q and QQ, print Q, delete Q
            mpz_inp_str(QQ->get_mpz_t(), sourceFile, CALC_BASE);
            (*Q) *= (*QQ);
            mpz_out_str(resultFile, CALC_BASE, Q->get_mpz_t());
            fputc('\n', resultFile);
            (*Q) = 0;
            // Jump back to P, get P
            fsetpos(sourceFile, &oldPos);
            mpz_inp_str(P->get_mpz_t(), sourceFile, CALC_BASE);
            // Mul P and QQ, delete QQ
            (*P) *= (*QQ);
            (*QQ) = 0;
            // Skip QQ, get PP
            skipLine(sourceFile);
            skipLine(sourceFile);
            mpz_inp_str(PP->get_mpz_t(), sourceFile, CALC_BASE);
            // Mul P and PP, delete PP, print P, delete P
            (*P) += (*PP);
            (*PP) = 0;
            mpz_out_str(resultFile, CALC_BASE, P->get_mpz_t());
            fputc('\n', resultFile);
            (*P) = 0;
        }
        partCount /= 2;
        fclose(sourceFile);
        fclose(resultFile);
        // swap filenames
        tempFilename = inputFilename;
        inputFilename = outputFilename;
        outputFilename = tempFilename;
        swapCount++;
    }
    delete P;
    delete Q;
    delete PP;
    delete QQ;
    remove(BIN_SPLIT_FILENAME_RESULTS);
    if (swapCount % 2 == 0)
        rename(sourceFilename, BIN_SPLIT_FILENAME_RESULTS);
    else
        rename(BIN_SPLIT_FILENAME_DATA2, BIN_SPLIT_FILENAME_RESULTS);
}

    Read the article

  • ANT: ways to include libraries and license issues

    - by Eric Tobias
I have been trying to use Ant to compile and ready a project for distribution. I have encountered several problems along the way; I was finally able to solve them, but the solution leaves me very unsatisfied. First, let me explain the set-up of the project and its dependencies. I have a project, let's call it Primary, which depends on a couple of libraries, such as the fantastic Guava. It also depends on another project of mine, let's call it Secondary. The Secondary project has some dependencies of its own, for example JDOM2. I have referenced the Jar I build with Ant in Primary. Let me give you the interesting bits of the build.xml so you can get a picture of what I am doing:

<project name="Primary" default="all" basedir=".">
    <property name='build' location='dist' />
    <property name='application.version' value='1.0'/>
    <property name='application.name' value='Primary'/>
    <property name='distribution' value='${application.name}-${application.version}'/>

    <path id='compile.classpath'>
        <fileset dir='libs'>
            <include name='*.jar'/>
        </fileset>
    </path>

    <target name='compile' description='Compile source files.'>
        <javac includeantruntime="false" srcdir="src" destdir="bin">
            <classpath refid='compile.classpath'/>
        </javac>
    </target>

    <target name='jar' description='Create a jar file for distribution.' depends="compile">
        <jar destfile='${build}/${distribution}.jar'>
            <fileset dir="bin"/>
            <zipgroupfileset dir="libs" includes="*.jar"/>
        </jar>
    </target>

The Secondary project's build.xml is nearly identical, except that it features a manifest, as it needs to run:

<target name='jar' description='Create a jar file for distribution.' depends="compile">
    <jar destfile='${dist}/${distribution}.jar' basedir="${build}">
        <fileset dir="${build}"/>
        <zipgroupfileset dir="libs" includes="*.jar"/>
        <manifest>
            <attribute name="Main-Class" value="lu.tudor.ssi.kiss.climate.ClimateChange"/>
        </manifest>
    </jar>
</target>

After I got it working (having tried for many hours to include those dependencies as Jars rather than as class files), I don't have the time or insight to go back and try to figure out what I did wrong. Furthermore, I believe that including these libraries as class files is bad practice, as it could give rise to licensing issues, while not packaging them and merely including them in a directory alongside the built Jar would most probably not (and if it would, you could choose not to distribute them yourself). I think my inability to correctly assemble the class path (I always received NoClassDefFoundError for classes or libraries in the Primary project when launching Secondary's Jar) comes down to not being very experienced with Ant. Would I be required to specify a class path for both projects? Specifying the class path as . should have allowed me to simply add all dependencies to the same folder as Secondary's Jar, should it not?

    Read the article

  • C++ Optimize if/else condition

    - by Heye
I have a single line of code that consumes 25%-30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

struct Entry {
    const float _cost;
    const long _id;
    // some other vars

    Entry(float cost, float id) : _cost(cost), _id(id) {
    }
};

template<class T>
struct lt_entry : public binary_function<T, T, bool> {
    bool operator()(const T &l, const T &r) const {
        // Most readable shape
        if (l._cost != r._cost) {
            return r._cost < l._cost;
        } else {
            return l._id < r._id;
        }
    }
};

The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick with the set.

To improve performance, I tried to express it in different shapes:

return l._cost < r._cost || r._cost > l._cost || l._id < r._id;
return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);

Even this:

typedef union { float _f; int _i; } flint;
//...
flint diff;
diff._f = (l._cost - r._cost);
return (diff._i && diff._i >> 31) || l._id < r._id;

But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable to SSE... The assembly looks somewhat like this:

movss (%rbx),%xmm1
mov $0x1,%r8d
movss 0x20(%rdx),%xmm0
ucomiss %xmm1,%xmm0
ja 0x410600 <_ZNSt8_Rb_tree[..]+96>
ucomiss %xmm0,%xmm1
jp 0x4105fd <_ZNSt8_Rb_[..]_+93>
jne 0x4105fd <_ZNSt8_Rb_[..]_+93>
mov 0x28(%rdx),%rax
cmp %rax,0x8(%rbx)
jb 0x410600 <_ZNSt8_Rb_[..]_+96>
xor %r8d,%r8d

I have a very tiny bit of experience with assembly language, but not really much. I thought it would be the best (only?) place to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, openmp and tbb. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking about how I could improve this line... Can you give me a suggestion for improving this part, or is it already at its best?

    Read the article

  • Problem with incomplete type while trying to detect existence of a member function

    - by abir
I was trying to detect the existence of a member function for a class, where the function tries to use an incomplete type. The typedef is:

struct foo;
typedef std::allocator<foo> foo_alloc;

The detection code is:

struct has_alloc {
    template<typename U, U x> struct dummy;

    template<typename U>
    static char check(dummy<void* (U::*)(std::size_t), &U::allocate>*);
    template<typename U>
    static char (&check(...))[2];

    const static bool value = (sizeof(check<foo_alloc>(0)) == 1);
};

So far I was using the incomplete type foo with std::allocator without any error on VS2008. However, when I replaced it with a nearly identical implementation:

template<typename T>
struct allocator {
    T* allocate(std::size_t n) { return (T*)operator new (sizeof(T)*n); }
};

it gives an error saying that, as T is an incomplete type, it has a problem instantiating allocator<foo>, because allocate uses sizeof. GCC 4.5 with std::allocator also gives the error, so it seems that during the detection process the class needs to be completely instantiated, even when I am not using that function at all. What I was looking for is void* allocate(std::size_t), which is different from T* allocate(std::size_t). My questions are (I have three questions, but as they are correlated, I thought it better not to create three separate questions):

1. Why doesn't MS's std::allocator check for the incomplete type foo while instantiating? Are they using any trick that can be implemented?
2. Why does the compiler need to instantiate allocator<T> to check the existence of the function, when sizeof is not used as an SFINAE mechanism to remove/add allocate to the overload resolution set? It should be noted that if I remove the generic implementation of allocate, leaving only the declaration, and specialize it for foo afterwards, such as

struct foo {};
template<>
struct allocator<foo> {
    foo* allocate(std::size_t n) { return (foo*)operator new (sizeof(foo)*n); }
};

after struct has_alloc, it compiles in GCC 4.5 while giving an error in VS2008, as allocator<T> is already instantiated and an explicit specialization for allocator<foo> is already defined.
3. Is it legal to use nested types of an std::allocator of an incomplete type, such as typedef foo_alloc::pointer foo_pointer;? Though it is practically working for me, I suspect nested types such as pointer may depend on the completeness of the type they take. It would be good to know if there is any possible way to typedef such types as foo_pointer, where the type pointer depends on the completeness of foo.

NOTE: As the code is not copy-pasted from an editor, it may have some syntax errors. I will correct them if I find any.

    Read the article

  • How to select table column names in a view and pass them to the controller in Rails?

    - by zachd1_618
So I am new to Rails, and to OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to my controller. For simplicity's sake, say my database schema looks like this:

--------------Plot---------------
|____x____|____y1____|____y2____|
|    1    |    1     |    1     |
|    2    |    2     |    4     |
|    3    |    3     |    9     |
|    4    |    4     |    16    |
|    5    |    5     |    25    |

... and in my Model, I have dynamic selector scopes that let me select just certain columns of data, in Plot.rb:

class Plot < ActiveRecord::Base
  scope :select_var, lambda { |varname| select(varname) }
  scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
end

This way, I can call:

irb>> @p1 = Plot.select_var(['x','y1']).between_x(1,3)

and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I dynamically plot. I want to start off in a view (plot/index), where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is, most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote, in the Plots controller:

def index
  d = Plot.first
  @tags = d.list_vars
end

So @tags = ['x','y1','y2']. Then in my plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller:

<%= select_tag( :variable, options_for_select(@plots.first.list_vars, :name, :multiple => :true) ) %>
<%= button_to 'Plot now!', :controller => "plots/plot_vars", :variable => params[:variable] %>

Finally, in the Plots controller again:

def plot_vars
  @plot_data = Plot.select_vars([params[:variable]])
end

The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-(

I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution: all you have to do is specify your database connection, and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here.
I think this project could be very helpful in the end, but it's difficult for me to figure out right now.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in recent months, and, well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard Service or Web App template.

Personally, I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits, we set out to move from the self-hosted service into an ASP.NET Web app instead.

The Missing Link for ASP.NET as a Service: Auto-Loading

I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great, and it's still running, doing its job today.

The tricky part of running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that your ASP.NET app effectively becomes active immediately, and Application_Start is fired, making sure your app stays up and running at all times.
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization

As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

IIS Configuration for Application Initialization

Initialization needs to be applied at the Application Pool as well as the IIS Application level. As of IIS 8, these settings can be made through the IIS Administration console.

Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

Now, at the Site/Application level, you can specify whether the site should preload: set the Preload Enabled flag to true.

At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

<system.webServer>
    <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
        <add initializationPage="ping.ashx" />
    </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page, and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea, since this can be pretty disruptive in some cases. Personally, I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching request at the end of that URL it'll still fire the request through the IIS pipeline.
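The post doesn't show the warmup endpoint itself. Assuming it really is just a trivial handler that returns an OK string as described (the class name here is a guess), ping.ashx could be as simple as:

<%@ WebHandler Language="C#" Class="Ping" %>

using System.Web;

// Hypothetical ping.ashx: a fast, trivial ASP.NET endpoint for warmup requests.
public class Ping : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Return a simple OK string - the cheapest possible managed request.
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

Anything that exercises the managed pipeline quickly works here; the point is only to force ASP.NET to spin up.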
Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

What about AppDomain Restarts?

In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

- Files are updated in the BIN folder
- Web Deploy to your site
- web.config is changed
- Hard application crash

These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires a request at an ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config

The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to edit them manually.

When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in top form, the module registration did not occur and I had to add it manually:

<globalModules>
    <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking that the module is actually registered.

Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

<system.applicationHost>
    <applicationPools>
        <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
             managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
            <processModel identityType="LocalSystem" setProfileEnvironment="true" />
        </add>
    </applicationPools>
    <sites>
        <site name="Default Web Site" id="1">
            <application path="/MPress.Workflow.WebQueueMessageManager"
                         applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
                <virtualDirectory path="/" physicalPath="C:\Clients\…" />
            </application>
        </site>
    </sites>
</system.applicationHost>

On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning, respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?

In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running, doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler()
        {
            CheckFrequency = 600000
        };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume, etc.). Otherwise the behavior and operation are very similar.

In this application, ASP.NET serves two purposes: it acts as the host for SignalR, and it provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET

Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");
    Controller.StopProcessing();
    Controller.Dispose();
    Thread.Sleep(1500); // give background threads some time
    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

Testing it out

I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting, no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
Startup feels faster because of the preload.

Starting and Stopping the 'Service'

Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager, the SignalR service and front monitoring app have a play and a stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service.

To start and stop from the command line, you can use the IIS appcmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

What's not to like?

There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

Easier to work with

But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

Summary

Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world. Whether it's existing applications that need some background processing for scheduling-related tasks, or whether you just create a separate site altogether just to host your service, it's easy to do, and you can leverage the same tool chain you're already using for other Web projects.
If you have lots of service projects, it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

What is the CODE Keyboard?

The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage, there's a good chance you've never actually typed on a really nice keyboard. Those that didn't start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

Setting Up the CODE Keyboard

Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We'll return to the key puller in a moment.

Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it.

While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
    After you’ve secured the cable and adjusted it to your liking, there is one more task before you plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems and keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter). Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, and Pause buttons, and in the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches the keys themselves are attached to are mounted to a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it. 
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use. Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism itself doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber dome switch membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard. 
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have any sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT keys don’t count towards the 6), so realistically you still won’t be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.  The Good, The Bad, and the Verdict We’ve put the CODE keyboard through its paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard. The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle-ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems, and with different keyboard layout needs, to switch between operating systems and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra thick rubber feet keeps it planted exactly where you place it on the desk. The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE. The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists. 
    Whether you’re a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money, as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass that doubtless leaves many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free tier, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Max translation size is 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret is auto-created and becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys. 
    To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:

    /// <summary>
    /// Retrieves an oAuth authentication token to be used on the translate
    /// API request. The result string needs to be passed as a bearer token
    /// to the translate API.
    ///
    /// You can find client ID and Secret (or register a new one) at:
    /// https://datamarket.azure.com/developer/applications/
    /// </summary>
    /// <param name="clientId">The client ID of your application</param>
    /// <param name="clientSecret">The client secret or password</param>
    /// <returns></returns>
    public string GetBingAuthToken(string clientId = null, string clientSecret = null)
    {
        string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

        if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
        {
            ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
            return null;
        }

        var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                     "&client_secret={1}" +
                                     "&scope=http://api.microsofttranslator.com",
                                     HttpUtility.UrlEncode(clientId),
                                     HttpUtility.UrlEncode(clientSecret));

        // POST Auth data to the oauth API
        string res, token;
        try
        {
            var web = new WebClient();
            web.Encoding = Encoding.UTF8;
            res = web.UploadString(authBaseUrl, postData);
        }
        catch (Exception ex)
        {
            ErrorMessage = ex.GetBaseException().Message;
            return null;
        }

        var ser = new JavaScriptSerializer();
        var auth = ser.Deserialize<BingAuth>(res);
        if (auth == null)
            return null;

        token = auth.access_token;
        return token;
    }

    private class BingAuth
    {
        public string token_type { get; set; }
        public string access_token { get; set; }
    }

    This code basically takes the client id and secret and posts them to the oAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
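    Since the bearer token is only good for roughly ten minutes, a web backend can avoid one auth round-trip per translation by caching the token for slightly less than its lifetime. Here is a minimal sketch of such a scheme using .NET's System.Runtime.Caching.MemoryCache; the CachedTokenProvider wrapper and the cache key are hypothetical inventions of this sketch, and it assumes GetBingAuthToken lives on the TranslationServices class used in the test methods further below:

    using System;
    using System.Runtime.Caching;

    public class CachedTokenProvider
    {
        // Cache for 8 minutes: comfortably below the ~10 minute token
        // lifetime, so a cached token is never handed out stale.
        private static readonly TimeSpan TokenTtl = TimeSpan.FromMinutes(8);

        public string GetToken(string clientId, string clientSecret)
        {
            string cacheKey = "BingAuthToken_" + clientId;

            // MemoryCache.Default is thread-safe, so this works for concurrent web requests.
            var token = MemoryCache.Default.Get(cacheKey) as string;
            if (token != null)
                return token;

            // Cache miss: authenticate against the oAuth endpoint as shown above.
            var translate = new TranslationServices();
            token = translate.GetBingAuthToken(clientId, clientSecret);
            if (token != null)
                MemoryCache.Default.Set(cacheKey, token, DateTimeOffset.UtcNow.Add(TokenTtl));

            return token;
        }
    }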
    Translation Once you have the authentication token, you can pass it on to the translate API. The auth token is passed as an Authorization header whose value is prefixed with 'Bearer '. Here's what the simple Translate API call looks like:

    /// <summary>
    /// Uses the Bing API service to perform translation.
    /// Bing can translate up to 1000 characters.
    ///
    /// Requires that you provide a ClientId and ClientSecret
    /// or set the configuration values for these two.
    ///
    /// More info on setup:
    /// http://www.west-wind.com/weblog/
    /// </summary>
    /// <param name="text">Text to translate</param>
    /// <param name="fromCulture">Two letter culture name</param>
    /// <param name="toCulture">Two letter culture name</param>
    /// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
    /// If not passed the default keys from the .config file are used if any</param>
    /// <returns></returns>
    public string TranslateBing(string text, string fromCulture, string toCulture,
                                string accessToken = null)
    {
        string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

        if (accessToken == null)
        {
            accessToken = GetBingAuthToken();
            if (accessToken == null)
                return null;
        }

        string res;
        try
        {
            var web = new WebClient();
            web.Headers.Add("Authorization", "Bearer " + accessToken);

            string ct = "text/plain";
            string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                            HttpUtility.UrlEncode(text),
                                            fromCulture, toCulture,
                                            HttpUtility.UrlEncode(ct));

            web.Encoding = Encoding.UTF8;
            res = web.DownloadString(serviceUrl + postData);
        }
        catch (Exception e)
        {
            ErrorMessage = e.GetBaseException().Message;
            return null;
        }

        // result is a single XML Element fragment
        var doc = new XmlDocument();
        doc.LoadXml(res);
        return doc.DocumentElement.InnerText;
    }

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application, and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes, and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

    [TestMethod]
    public void TranslateBingWithAuthTest()
    {
        var translate = new TranslationServices();

        string clientId = DbResourceConfiguration.Current.BingClientId;
        string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

        string auth = translate.GetBingAuthToken(clientId, clientSecret);
        Assert.IsNotNull(auth);

        string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }

    [TestMethod]
    public void TranslateBingIntegratedTest()
    {
        var translate = new TranslationServices();
        string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
        Assert.IsNotNull(text, translate.ErrorMessage);
        Console.WriteLine(text);
    }
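    One usage note: because TranslateBing accepts the access token as an optional parameter, a batch of translations only needs a single authentication round-trip, as long as the whole batch completes within the token's roughly ten-minute lifetime. A quick illustrative sketch (the phrases are made up for illustration; the configuration properties are the same ones used in the tests above):

    var translate = new TranslationServices();

    string clientId = DbResourceConfiguration.Current.BingClientId;
    string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

    // Authenticate once...
    string auth = translate.GetBingAuthToken(clientId, clientSecret);

    // ...then reuse the same bearer token for every string in the batch.
    string[] phrases = { "Hello World", "Where is the train station?", "Thank you!" };
    foreach (string phrase in phrases)
    {
        string translated = translate.TranslateBing(phrase, "en", "de", auth);
        Console.WriteLine("{0} -> {1}", phrase, translated ?? translate.ErrorMessage);
    }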
    Other API Methods The Translate API has a number of methods available; this one is the simplest, but probably also the most common, translating a single string. You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method source on GitHub; Translating with Google Translate without Google API Keys; Creating a data-driven ASP.NET Resource Provider. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them. Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article. A Little History In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes? Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entirely new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel. Understanding PivotTables So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them… The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably. 
    If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too! How to Create a PivotTable First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears:  The PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable (above): The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them. The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and (b) “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed. If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.  
To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box: Now, finally, things start to get interesting!  Our PivotTable starts to take shape….   With a couple of clicks we have created a table that would have taken a long time to do manually. So what else can we do?  Well, in one sense our PivotTable is complete.  We’ve created a useful summary of our source data.  The important stuff is already learned!  For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized. First, we can create a two-dimensional table.  Let’s do that by using “Payment Method” as a column heading.  Simply drag the “Payment Method” heading to the Column Labels box: Which looks like this: Starting to get very cool! Let’s make it a three-dimensional table.  What could such a table possibly look like?  Well, let’s see… Drag the “Package” column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which “holiday package” was being purchased.  For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package: And so, if you think about it the right way, our PivotTable is now three-dimensional.  Let’s keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings.  Click the drop-down arrow next to Column Labels, and untick “Cash”: Let’s see what that looks like…As you can see, “Cash” is gone. Formatting This is obviously a very powerful system, but so far the results look very plain and boring.  For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers.  Let’s rectify that. A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting.  The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost.  We need a way that will make them (semi-)permanent. First, we locate the “Sum of Amount” entry in the Values box, and click on it.  A menu appears.  We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0.  Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts. While we’re on the subject of formatting, let’s format the entire PivotTable.  There are a few ways to do this.  Let’s use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable:   Other Options We can work with dates as well.  Now usually, there are many, many dates in a transaction list such as the one we started with.  But Excel provides the option to group data items together by day, week, month, year, etc.  Let’s see how this is done. 
    First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context-menu: The grouping box appears.  We select Months and click OK: Voila!  A much more useful table: (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.) Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters). Keeping things simple again, let’s see how to show averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context-menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name from “Average of Amount” to something a little more concise.  Type in something like “Avg”: Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”: If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable):
    Drag “Salesperson” into the Column Labels
    Drag the “Amount” field down into the Values box three times
    For the first “Amount” field, change its custom name to “Total” and its number format to Accounting (0 decimal places)
    For the second “Amount” field, change its custom name to “Average”, its function to Average and its number format to Accounting (0 decimal places)
    For the third “Amount” field, change its name to “Count” and its function to Count
    Drag the automatically created field from Column Labels to Row Labels
    Here’s what we end up with: Total, average and count on the same PivotTable! Conclusion There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon-tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel file that should work with most versions of Excel, so you can download it to practice your PivotTable skills. 
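    For the programmers in the audience, it can help to see what a PivotTable actually computes, expressed as code. The following C# sketch is purely illustrative (the Transaction class and the sample rows are invented stand-ins for the article’s transaction list), but the grouped aggregation has exactly the shape of the salesperson-by-month summary built above, including the Sum, Average and Count options:

    using System;
    using System.Linq;

    class Transaction
    {
        public string Salesperson { get; set; }
        public DateTime DateBooked { get; set; }
        public decimal Amount { get; set; }
    }

    class PivotSketch
    {
        static void Main()
        {
            // Invented raw data; in the article this is the list of sales transactions.
            var transactions = new[]
            {
                new Transaction { Salesperson = "James Cook", DateBooked = new DateTime(2010, 1, 15), Amount = 18000m },
                new Transaction { Salesperson = "James Cook", DateBooked = new DateTime(2010, 1, 22), Amount = 12000m },
                new Transaction { Salesperson = "Jane Doe",   DateBooked = new DateTime(2010, 2, 3),  Amount = 21500m },
            };

            // Row label = Salesperson, grouping = month of Date Booked, value = Amount.
            var summary = transactions
                .GroupBy(t => new { t.Salesperson, Month = new DateTime(t.DateBooked.Year, t.DateBooked.Month, 1) })
                .Select(g => new
                {
                    g.Key.Salesperson,
                    g.Key.Month,
                    Total = g.Sum(t => t.Amount),       // "Sum of Amount"
                    Average = g.Average(t => t.Amount), // the Average function
                    Count = g.Count()                   // the Count function
                });

            foreach (var row in summary)
                Console.WriteLine("{0}  {1:MMM yyyy}  Total={2:C0}  Avg={3:C0}  Count={4}",
                                  row.Salesperson, row.Month, row.Total, row.Average, row.Count);
        }
    }

    The two invented James Cook rows deliberately sum to the $30,000 January figure from the article’s opening summary; dragging fields around in the PivotTable UI is, in effect, choosing the GroupBy key and the aggregate functions.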
    Download Our Practice Excel File

    Read the article

  • CodePlex Daily Summary for Thursday, April 01, 2010

    CodePlex Daily Summary for Thursday, April 01, 2010
    New Projects
    ASP.NET Bing Maps: Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map mode, map style, add a PushPin...
    Bricks' Bane: Bricks' Bane is a brick breaker game developed using XNA and published on XBox Live Indy Games. Source code includes a C# library useful for game d...
    cURL for dotnet: Another dotnet binding for libcurl see http://curl.haxx.se for more info about cURL/libcurl
    Custom Functoid que acessa o banco de dados SQL: A functoid for BizTalk Server 2006 using data from SQL Server 2005
    FEI STU Pharmacy e-shop: An electronic shop for medicines. Create a simple client-server application that implements an electronic shop for medicines. Modules: 1. e-shop f...
    Flavours of Wix: Investigating building DSL's to create installers based on WIX
    Fulcrum: Fulcrum is a code generation framework built on top of the T4 technology in Visual Studio.
    GreviousAngel: New team project
    Habanero Inferno: Habanero Inferno coming soon.
    Kawo Pounga !: A useless game !!!
    LetsXNA!!: This is a project created by members of the LinkedIn group Lets XNA!! to build a XNA game and have fun in the process. The goal is to build a simple ...
    Linq To Naver, Custom Linq Provider for Naver searchengine OpenAPI: <project name>Linq to Naver</project name> <programming language>C#, CSharp</programming language>
    LocoSync: LocoSync is a file Synchronization/Backup/Archiver program, which is easily extendable. It is easy to add new synchronization methods using C# code.
    Natural Language Processing: Natural Language Processing
    Nop Commerce Azure: This project lets you set up your online e-commerce site quickly and simply, while benefiting from all the advantages of the plat...
    Nwinsock: Nwinsock is a component for networking, object transfer and packet compression; supports the TCP and UDP protocols, thread based
    OnTime: OnTime is a simple program from that matching game back in the day, just to bring light to programming techniques. It's developed in C#.
    OpenGL ES 2.0 Compact Framework Wrapper: OpenGL ES 2.0 wrapper for .NET Compact Framework. Developed on an HTC HD 2 device but should run on any Windows Mobile device that has the correct lib...
    ortaknokta: This project is being prepared to let a few people come together and hold discussions on topics of their choosing.
    P-Data: P-Data is a tool for obtaining information from data files (data profiling) through SQL queries, automating...
    PowerAuras: Addon for World of Warcraft - displays effects on screen under different conditions
    PowerShell ToodleDo Module: PowerShell module for interacting with the toodledo.com online to-do list site.
    RSS Reader for Windows Phone 7: This RSS Reader application for windows 7
    Streamlet Containers: This is my implementation of STL-style containers, including a dynamic array, a double-linked list and an r-b-tree. Just for practice. Please feel free...
    Troav: Social encyclopedia built using c# and the Orchard framework
    Umbraco App_Code/Usercontrol Editor: Package for Umbraco to add App_Code and usercontrol editing to the Developer section of the Umbraco administration system. Will support GeSHi editi...
    Vczh Reactive Programming Library: Reactive programming library providing a stream or state machine view for using .NET events
    WhoIs XML API: The project uses the public WhoIs XML API service (http://www.whoisxmlapi.com/) to obtain detailed information. 
    The project is written in C# and serial...
    New Releases
    ( λunula ): Lunula 0.4.0: Changelog: Implemented a virtual machine. Implemented a compiler for the virtual machine. Added first-class continuations (call/cc). Removed co...
    Alter gear SQL index Management: Setup 1.0.1: Changes: Test connection - successful message. Connection string timeout property added. Setup Project added to project source code. Possible issu...
    ASP.NET Bing Maps: ASP.NET Bing Maps 0.1b: Project Description: Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map mode, map ...
    ASP.NET MVC Validation Library: ASP.NET MVC Validation Library 1.3: Changes since 1.2: - Support remote validation - Support custom server-side validation - The design of the validation attribute is improved Note: test...
    BigDays 2010: HelfenHelfen - v1: PLEASE NOTE: This project is published under the Microsoft Public License (Ms-PL). http://bigdays10.codeplex.com/license IT IS A DEMO SOLUTION FOR...
    Caps - Manage your collection!: Caps Console 0.1.4.0 Alpha: This is a preview release (Alpha quality). This release contains only a limited amount of fixes and new features from the user's point of view. Major focus f...
    CSharpQuery: Version 1.0: This version is stable. Please report any possible bugs. The next release will include a sample project and index management tools. Until then pl...
    Custom Functoid que acessa o banco de dados SQL: Custom Functoid SQL Server: Visual Studio solution with the functoid's source code and SQL script for BizTalk
    Dawf: Dual Audio Workflow: Beta 3: Suppose two good audio events overlap in time with a video event of interest. (This can only happen if PluralEyes isn't used on everything.) Befo...
    Dirac codec user interface: Dirac User Interface (checkin 37132): Same as the 36795 version, but done with the latest source code.
    DotNetNuke® Blog: 04.00.00 RC 3: PLEASE NOTE: You may upgrade an RC 2 install, but please do not upgrade previous versions of the Beta releases - please start from RC 2 or 03.05.0...
    DotNetNuke® Skinning Extensions: SimpleTitle Skin Object: This is an example skin object that only renders the "page name" if used in a skin and the "module title" if used in a container. No extra spans, c...
    Fulcrum: Fulcrum v0.9: Initial release of Fulcrum
    HelloTipi Photos Uploader: Version 2010.03.31: Very small fixes: - Fixed the send button - Couldn't interact with the application during an upload
    kdar: KDAR 0.0.18: KDAR - Kernel Debugger Anti Rootkit - dispatch table signature bases updated (many drivers) - scripts refactored - some bugs fixed
    Legend: Legend Libraries: The latest release.
    Linq To Naver, Custom Linq Provider for Naver searchengine OpenAPI: Linq to Naver: Linq to Naver
    Live at Education Meta Web-Service: Live at Education Meta Web Service v. 1.0: We're happy to publish the final version of the Live at Education Meta Web Service (LAEMWS). In this release: Huge list of Windows Live ID enabled servic...
    Live@edu SSO WebPart for MOSS 2007: WebPart 2.0: This release is based on the Live@edu Meta Web Service (laemws - http://laemws.codeplex.com). 
    It is highly recommended to use the laemws version of the webpart,...
    LocoSync: LocoSync v0.1r2010.03.31 installer: This is the first public release. Unzip and run setup. Or, if you have the .net 3.5 runtime available, download the executable and try...
    Natural Language Processing: test1: test
    Nop Commerce Azure: Nop Commerce Azure: Nop Commerce full sources with additional Azure projects.
    Nwinsock: NWinsock: Nwinsock version 1.0 is here
    Open NFe: DANFe v1.9.8: CST correction
    OpenGL ES 2.0 Compact Framework Wrapper: v0.1 Sources: First rough release. It has a working sample application which renders a triangle with rotation. Don't expect anything great. Just a very early ...
    patterns & practices - Windows Azure Guidance: Code Drop 3: Second iteration of a-Expense on Azure. This release builds on the previous one and mainly focuses on replacing SQL Azure by Table Storage. We hav...
    Posh4DNN: Posh4DNN Scripts 2.0: This release greatly increases the speed of installation and incorporates the use of IIS and SQL Server snap-ins for managing those services. Inst...
    Process Enactment Tool Framework: PET 1.1: PET Core: new intermediate model with arbitrary "clean" relations among objects and several updates of the object fields (see DependencyInterfacesA...
    Project Tru Tiên: Elements-test V1-fix (v1): This is Elements-test V1 with the following issues fixed: - Fixed the display of the Hổ Kỳ Lân mount - Fixed the Chinese-language tab display --> now in Vietnamese - Fixed the displ...
    Sentinel - Log Viewer: Sentinel 0.8.1 (nLog support): Build of the 0.8.1 code (svn revision 36823), which included support for both nLog and log4net; it has been in SVN for a while but didn't have a bi...
    sgMotion Animation Library: SgMotion v1.1 (For Sunburn 1.3.1): SgMotion v1.1 (For Sunburn 1.3.1). This release includes both a Windows & Xbox sample. The sample is set to default to Forward rendering, but can e...
    sTASKedit: sTASKedit 44538 (Developer Alpha): + nearly all fields are viewed in this release for task verification and identifying of unknowns
    Test Project (ignore): asdf asdf asdf asdf asdf asdf asdf sadf sdf asdf a: ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ...
    Test Project (ignore): cdscs: csdcacac
    Troav: Traov20100331 Source Pre-Alpah: This is some experiments with implementing custom modules with Microsoft's Orchard framework. This is very preliminary, and subject to change.
    Weather Report WebControls: WebWeatherReport: Source code for the main files
    WhoIs XML API: Initial Release: Initial Release
    You are here (for Windows Mobile): CAB file and Source Code: You can find more controls and samples for Windows Mobile developers at: http://www.beemobile4.net
    Most Popular Projects: hmrEngine, Rawr, WBFS Manager, ASP.NET Ajax Library, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, AJAX Control Toolkit, Windows Presentation Foundation (WPF), ASP.NET, LiveUpload to Facebook
    Most Active Projects: Rawr, Graffiti CMS, Base Class Libraries, jQuery Library for SharePoint Web Services, BlogEngine.NET, Microsoft Biology Foundation, N2 CMS, LINQ to Twitter, Managed Extensibility Framework, Farseer Physics Engine

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61  | Next Page >