Search Results

Search found 11100 results on 444 pages for 'normal distribution'.


  • PKG can't install silently on Mac OS X 10.5

    - by ericdm
    I have built an installer with PackageMaker 3.0.6 on Mac OS X 10.8, and I have added a JavaScript function to the Distribution file that detects whether a certain app is running. The code looks like this:

        var allProcess = new Array();
        allProcess = system.applications.all();
        var allProcessCount = allProcess.length;
        ...

    If I install the pkg normally (with the Installer UI) on 10.8, 10.7, or 10.5, everything works fine. If I install silently from the command line on 10.8 or 10.7, there is no error either. But a silent install on 10.5.8 fails with a JavaScript error in the terminal, and the install aborts. If I remove the line "var allProcessCount = allProcess.length;", the silent install succeeds on 10.5.8; as soon as any code touches allProcess.length, the error comes back. It looks as if the array properties can't be used during a silent install on 10.5, while 10.7 and 10.8 are fine and a UI install also works on 10.5. Does anyone know how I can solve this issue? Thanks!


  • How to hold a queue of messages and have a group of worker threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:

                                                 / Worker 1 \
        =Request Channel= - [Holding Queue|||] - Worker 2 - =Response Channel=
                                                 \ Worker 3 /

    That is:

    - Requests come in and enter a FIFO queue
    - Identical workers then pick up tasks from the queue
    - At any given time, any worker may work on only one task
    - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task
    - When a task is complete, the worker places the result on the Response Channel

    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but a single waiting line is preferable because some tasks may be completed faster than others. Furthermore, I'd like insight into how many jobs are remaining (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message queuing/work distribution pattern while avoiding polling?

    Edit: It appears I'm looking for the Message Dispatcher pattern -- how can I implement this using Spring/Spring Integration?
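    As a language-agnostic sketch of the pattern being described (not Spring Integration itself), a blocking queue plus a fixed pool of consumers gives exactly this behavior without polling; the channel names below are hypothetical stand-ins:

        import queue
        import threading

        request_channel = queue.Queue()    # the FIFO holding queue
        response_channel = queue.Queue()

        def worker():
            while True:
                task = request_channel.get()       # blocks until work arrives; no polling
                try:
                    # tasks are assumed to be callables placed on the queue
                    response_channel.put(task())   # one task at a time per worker
                finally:
                    request_channel.task_done()

        # three identical workers, as in the diagram
        for _ in range(3):
            threading.Thread(target=worker, daemon=True).start()

    A blocked get() wakes a free worker the moment a task arrives, and request_channel.qsize() reports how many jobs are still waiting.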


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres db; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side, and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, a user can search for all items that are blue and shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks
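    One direction worth sketching: let the database do the binning so only the histogram, not millions of rows, crosses the wire. A minimal Python illustration (the table and column names and the DSN are hypothetical; width_bucket is a standard PostgreSQL function):

        import psycopg2

        conn = psycopg2.connect("dbname=logger_data")   # hypothetical DSN
        cur = conn.cursor()
        # 20 equal-width weight bins, computed server-side
        cur.execute("""
            SELECT width_bucket(item_weight, 0, 100, 20) AS bucket, count(*)
            FROM items
            WHERE item_color = %s AND date_collected BETWEEN %s AND %s
            GROUP BY bucket
            ORDER BY bucket
        """, ("blue", "2009-01-01", "2009-03-31"))
        histogram = cur.fetchall()   # [(bucket, count), ...] -- a small result set

    A PHP front end can issue the same SQL; the point is pushing the aggregation into Postgres.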


  • Which is the best license for my Open Source project?

    - by coderex
    I am a web developer, and I don't have enough knowledge about software licenses. I wish to publish some of my works, and I need to select licenses for them. My software product is free of cost, but I have some restrictions on the distribution/modification of the code:

    - It's free of cost (but donations are acceptable ;-)).
    - The source code is freely available. You can use, customize, or edit/remove code (as long as the basic nature of the software is not changed).
    - You don't have any permission to change the product name.
    - There are some libraries and classes in a folder called "myname". You don't have permission to rename "myname".
    - You can contribute any additions or modifications to my project to the original source repository (contributors' names/emails/site links will be listed in the credits file).
    - You can't remove the original author's name from the license.
    - You can put the license file or license code anywhere in the project files or folders.
    - You can redistribute this code as free or commercial software. :)

    Do you think all these restrictions are valid? Given these restrictions, which license should I use?

    Edit 1: My main intention is to make the product more popular with free source code while ensuring the original author is not ignored. The product is open.

    Edit 2: Thank you all, the above points are because of my lack of knowledge of license terms. You can help me to correct or remove some of the above points. What I'm basically looking for is in my Edit 1.


  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

    - Add a new value
    - Iterate over all of the values

    There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false negatives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

    EDIT: To answer your questions:

    - No, the items don't need to be sorted
    - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
    - There could be a decent number of these sets in memory at once (~100).
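    To make the bit-array arithmetic from the question concrete: a bitmap over the full 0-50,000,000 key range costs a fixed ~6.25 MB per set regardless of how many ids it holds, which beats a 4-byte-per-item list once a set exceeds ~1.6M items. A minimal sketch (Python here purely for illustration; the question itself is about .NET):

        class IntBitSet:
            """Bitmap over a fixed key range [0, limit); ~limit/8 bytes."""

            def __init__(self, limit=50_000_000):
                self.bits = bytearray((limit + 7) // 8)   # ~6.25 MB for 50M keys

            def add(self, value):
                self.bits[value >> 3] |= 1 << (value & 7)

            def __iter__(self):
                for i, byte in enumerate(self.bits):
                    while byte:
                        low = byte & -byte                  # isolate lowest set bit
                        yield (i << 3) + low.bit_length() - 1
                        byte ^= low

    At ~100 sets in memory that fixed cost is ~625 MB, which is the trade-off to weigh against sparse structures like the trie.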


  • How can I compile .NET 3.5 C# code on a system without Visual Studio?

    - by JimEvans
    I have some C# code that uses some constructs specific to .NET 3.5. When you install the .NET Framework distribution, you get the C# compiler installed with it (csc.exe). Even if I specify the csc.exe in C:\Windows\Microsoft.NET\Framework\v3.5, I cannot compile the code on a computer with only the .NET Framework installed, but not Visual Studio. I am able to compile code that uses v2.0 constructs without difficulty. How can I accomplish this? Here is a sample that demonstrates my problem:

        using System;

        class Program
        {
            public static void Main()
            {
                // The MacOSX value of the PlatformID enum was added after .NET v2.0
                if (Environment.OSVersion.Platform == PlatformID.MacOSX)
                {
                    Console.WriteLine("Found mac");
                }
                Console.WriteLine("Simple program");
            }
        }

    When compiling this code using csc.exe, I receive the following error:

        test.cs(9, 58): error CS0117: 'System.PlatformID' does not contain a definition for 'MacOSX'

    When executing csc.exe /? I receive the banner:

        Microsoft (R) Visual C# 2008 Compiler version 3.5.21022.8
        for Microsoft (R) .NET Framework version 3.5
        Copyright (C) Microsoft Corporation. All rights reserved.


  • Update on: How to model random non-overlapping spheres of non-uniform size in a cube using MATLAB?

    - by user3838079
    I am trying to use MATLAB to generate random locations for non-uniformly sized, non-overlapping spheres in a cube. The while loop in the code below never seems to end. I don't know what I am missing in the code. I have run the code for no. of spheres (n) = 10 and dims = [10 10 10].

        function [ c r ] = randomSphere( dims )
        % creating one sphere at random inside [0..dims(1)]x[0..dims(2)]x...
        % radius and center coordinates are sampled from a uniform distribution
        % over the relevant domain.
        % output: c - center of sphere (vector cx, cy,... )
        %         r - radius of sphere (scalar)
        r = rand(1); % you might want to scale this w.r.t dims or other consideration
        c = r + rand( size(dims) )./( dims - 2*r ); % make sure sphere does not exceed boundaries

        function ovlp = nonOverlapping( centers, rads )
        % check if several spheres with centers and rads overlap or not
        ovlp = false;
        if numel( rads ) == 1
            return; % nothing to check for a single sphere
        end
        dst = sqrt( sum( bsxfun( @minus, permute( centers, [1 3 2] ),...
                                 permute( centers, [3 1 2] ) ).^2, 3) );
        ovlp = dst >= bsxfun( @plus, rads, rads.' ); %' all distances must be smaller than r1+r2
        ovlp = any( ovlp(:) ); % all must not overlap

        function [centers rads] = sampleSpheres( dims, n )
        % dims is assumed to be a row vector of size 1-by-ndim
        % preallocate
        ndim = numel(dims);
        centers = zeros( n, ndim );
        rads = zeros( n, 1 );
        ii = 1;
        while ii <= n
            [centers(ii,:), rads(ii) ] = randomSphere( dims );
            if nonOverlapping( centers(1:ii,:), rads(1:ii) )
                ii = ii + 1; % accept and move on
            end
        end
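    For comparison, the accept/reject structure this code is aiming for can be sketched compactly (Python purely for illustration; the radius range is an assumption, since the MATLAB version leaves radii unscaled):

        import math
        import random

        def sample_spheres(dims, n):
            """Rejection-sample n non-overlapping spheres inside a box."""
            spheres = []                                      # (center, radius) pairs
            while len(spheres) < n:
                r = random.uniform(0.1, 1.0)                  # assumed radius range
                c = [random.uniform(r, d - r) for d in dims]  # stay inside the box
                if all(math.dist(c, c2) >= r + r2 for c2, r2 in spheres):
                    spheres.append((c, r))                    # accept only if clear of all others
            return spheres

    The accept test is "every pairwise distance is at least the sum of the radii"; it must hold against all previously accepted spheres, and a single sphere is trivially acceptable.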


  • Facebook dialog appears but always says "An error has occurred"

    - by Conor James
    I have a Facebook Like button on my page, set up like this, and it works fine:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        ...
        <head>
        ...
        <script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
        </head>
        <body>
        <div id="fb-root"></div>
        <fb:like id="fb-like" href="http://www.example.com" layout="button_count"
                 show_faces="false" width="100"></fb:like>
        ...

    myscript.js:

        FB.ui(
          {
            method: 'stream.publish',
            message: 'getting educated about Facebook Connect',
            attachment: {
              name: 'Connect',
              caption: 'The Facebook Connect JavaScript SDK',
              description: (
                'A small JavaScript library that allows you to harness ' +
                'the power of Facebook, bringing the user\'s identity, ' +
                'social graph and distribution power to your site.'
              ),
              href: 'http://github.com/facebook/connect-js'
            },
            action_links: [
              { text: 'Code', href: 'http://github.com/facebook/connect-js' }
            ],
            user_message_prompt: 'Share your thoughts about Connect'
          },
          function(response) {
            if (response && response.post_id) {
              alert('Post was published.');
            } else {
              alert('Post was not published.');
            }
          }
        );

    But when I call the FB.ui method above from my JavaScript source, the Facebook dialog pops up but displays this error message:

        An error occurred. Please try again later.

    This has happened repeatedly for two days since I started trying to implement it. Not a very helpful error message. Any idea what might cause it or how to narrow down the problem?


  • How do I add the go language to gitg's list of viewable sources?

    - by Hotei
    Hoping a git guru will help out here. I am just beginning to use git for the first time and have (among other things :-) ) git and gitg installed from the Ubuntu 10.04/AMD64 distribution (i.e. maybe not the latest version, but not ancient). I am trying to look at the Go code I've committed via gitg, and in the "tree tab" it says "Cannot display file content as text". However, the "details tab" shows the diffs of the same file just fine. I know gitg's "tree tab" is working because I can use the tree view on *.c / *.html / *.txt etc. just fine. Is there a way to tweak gitg into understanding that "*.go" is just text?

    A little more context: the installed gitg version is 0.0.5, i.e. a version behind the latest, 0.0.6, the source of which I am looking through now. I do have a working /usr/share/gtksourceview-2.0/language-specs/go.lang; it works just fine as a highlighter in gedit. It appears that gitg may require displayable files to have a mime type of "text/plain", so I added that to go.lang. No joy, gitg still fails on *.go. I'm relatively sure the fix is simple, I just don't know where to look.


  • Which Perl module can handle a variety of date formats with Unicode characters?

    - by ram
    My requirement is parsing XML files which contain a wide variety of timestamps based on the locales in which they were written. They may contain Unicode characters in the case of Chinese or Korean locales. I have to parse these timestamps and put them in a standard format, something like 2009-11-26 12:40:54, to store them in an Oracle database. Sometimes I may not even know the locale, and yet I have to parse the timestamps. I am looking for a module that automatically detects the timestamp format (including Unicode characters for am and pm in the local language) and converts it to epoch time, so that I can convert it back to whatever form I like. I have gone through similar questions in this forum. A few suggested the DateFormat module, and the Date::Parse module. The Perl distribution I am using is 5.10, so Date::Manip doesn't come as a core module. As I am supposed to use just the basic core modules and a few CPAN modules (on request; I cannot ask for all), I request you to kindly suggest a good module that satisfies all my requirements. Thanks in advance


  • Help me understand making a Maven project with non-Maven jar dependencies usable by others

    - by deet
    Hi, I'm in the process of learning Maven (and Java packaging & distribution) with a new OSS project I'm making as practice. Here's my situation, all Java of course:

    - My main project is ProjectA, Maven-based in a GitHub repository.
    - I have also created one utility project, Maven-based, in GitHub: ProjectB.
    - ProjectA depends on a project I have heavily modified that originally came from a Google Code Ant-based repository: ProjectC.

    So, how do I set up the build for ProjectA such that someone can download ProjectA.jar and use it without needing to install jars for ProjectB and ProjectC? And how do I set up the build such that someone could check out ProjectA and run only 'mvn package' for a full compile? (Additionally, what should I do with my modified version of ProjectC? Include the class files directly in ProjectA, or fork the project into something that could then be used as a Maven dependency?) I've been reading around, including a couple of similar SO questions, but I'm unclear how those relate to my particular circumstance. So, any help would be appreciated. Thanks!


  • OSGi bundle imports packages from non-bundle jars: create bundles for them?

    - by John Simmons
    I am new to OSGi, and am using Equinox. I have done several searches and can find no answer to this. The discussion at "OSGI - handling 3rd party JARs required by a bundle" helps somewhat, but does not fully answer my question.

    I have obtained a jar file, rabbitmq-client.jar, that is already packaged as an OSGi bundle (with Bundle-Name and other such properties in its MANIFEST.MF), and that I would like to install as a bundle. This jar imports the packages org.apache.commons.io and org.apache.commons.io.input from commons-io-1.2.jar. The RabbitMQ client 2.7.1 distribution also includes commons-cli-1.1.jar, so I presume that it is required as well. I examined the manifests of these commons jars and found that they do not appear to be packaged as bundles; that is, their manifests have none of the standard bundle properties.

    My specific question is: if I install rabbitmq-client.jar as a bundle, what is the proper way to get access to the packages that it needs to import from the commons jars? There are only three alternatives that I can think of, without rebuilding rabbitmq-client.jar:

    1. The packages from the commons jars are already included in the Equinox global classpath, and rabbitmq-client.jar will get them automatically from there.
    2. I must make another bundle with the two commons jars, export the needed packages, and install that bundle in Equinox.
    3. I must put these two commons jars on the global classpath when I start Equinox, and they will be available to rabbitmq-client.jar from there.

    I have read that one normally does not use the global classpath in an OSGi container. I am not clear on whether items from the global classpath are even available when building individual bundle classpaths. However, I note that rabbitmq-client.jar also imports other packages such as javax.net, which I presume come from the global classpath. Or is there some other bundle that exports them? Thanks for any assistance!
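    To make alternative 2 concrete: a wrapper bundle is just a jar whose manifest embeds the commons jars and exports their packages. A minimal sketch of such a MANIFEST.MF (the symbolic name is made up, and the version attributes are assumptions based on the jar versions):

        Bundle-ManifestVersion: 2
        Bundle-SymbolicName: org.apache.commons.wrapper
        Bundle-Version: 1.0.0
        Bundle-ClassPath: commons-io-1.2.jar, commons-cli-1.1.jar
        Export-Package: org.apache.commons.io;version="1.2",
         org.apache.commons.io.input;version="1.2",
         org.apache.commons.cli;version="1.1"

    With a bundle like this installed, Equinox can wire rabbitmq-client.jar's Import-Package entries to these exports instead of relying on the global classpath.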


  • How to Create the Upload File for Application Loader?

    - by Ohad Regev
    When I use Application Loader, I get to the point where it asks me to "Choose..." the file to be uploaded. If I understand correctly, it is supposed to be the appName.app file I see under "Products" in my app bundle (I right-click it and select "Show in Finder" to get to the specific file in the library; then I'm supposed to zip it, and the ZIP file is what I will choose in Application Loader). First, am I correct with this assumption? If yes:

    - What should I define differently in Xcode compared to the way I built the application for testing (on the simulator and on my personal iPhone)?
    - Should I change the Info > Command-line build use from Debug to Release?
    - How should I define the Build Settings > Code Signing section (in which field should I select the "iPhone Developer" option, and in which should it be "iPhone Distribution")?
    - Are there any other important Info/Build Settings/plist/etc. fields I should attend to?

    Any help will be appreciated...


  • Randomly Position A Div

    - by Connie
    I'm looking for a way to randomly position a div anywhere on a page (ONCE per load). I'm currently programming a PHP-based fantasy pet simulation game. One of the uncommon items in the game is a "Four Leaf Clover." Currently, users gain "Four Leaf Clovers" through random distribution, but I would like to change that (it's not interactive enough!).

    What I am trying to do: the idea is to make users search for these "Four Leaf Clovers" by randomly placing this image (my rendition of a four leaf clover) anywhere on the page. I'd like to do this using a JavaScript/Ajax script that generates a div and then places it anywhere on the page, and does not move it once it's been placed, until the page is reloaded. I've tried so many Google searches, and the closest thing that I've found so far is a script from a similar question, but removing the .fadein, .delay, and .fadeout calls stopped the script from working entirely. I'm not by any means a pro with Ajax. Is what I'm trying to do even possible?


  • VB6 ActiveX exe - what is the proper registration sequence?

    - by Timbuck
    I have recently updated a Visual Basic 6 application that is an ActiveX exe, running on Windows XP. I have a couple of testers for this application who have received a copy of the exe and are attempting to run it. However, they are getting the error message "Unexpected error; quitting" when trying to do so. A key difference between their testing and mine is that on the machines I tested on, I have admin rights and was able to register the application using the appname.exe /regserver command line. The guidance at MS Support about file registration appears unclear:

        Visual Basic ActiveX EXE files register themselves the first time you
        run the EXE. However, you cannot use the EXE as a COM server until it
        is registered.

    So does this mean that after the first time the users run the exe, the application should be correctly registered, and the error I am receiving is a sign of something other than an incorrectly registered application? Or does this mean that the application will not work properly until the file is explicitly registered using the appname.exe /regserver command line?

    NB: during a production distribution, the software would be sent out to client PCs using Systems Management Server, which isn't an option for this testing.


  • Using "from __future__ import division" in my program, but it isn't loaded with my program

    - by Sara Fauzia
    I wrote the following program in Python 2 to do Newton's method computations for my math problem set. While it works perfectly, for reasons unbeknownst to me, when I initially load it in IPython with %run -i NewtonsMethodMultivariate.py, the Python 3 division is not imported. I know this because after I load my Python program, entering x**(3/4) gives "1". After manually importing the new division, x**(3/4) remains x**(3/4), as expected. Why is this?

        # coding: utf-8
        from __future__ import division
        from sympy import symbols, Matrix, zeros

        x, y = symbols('x y')
        X = Matrix([[x],[y]])
        tol = 1e-3

        def roots(h, a):
            def F(s):
                return h.subs({x: s[0,0], y: s[1,0]})
            def D(s):
                return h.jacobian(X).subs({x: s[0,0], y: s[1,0]})
            if F(a) == zeros(2)[:,0]:
                return a
            else:
                while (F(a)).norm() > tol:
                    a = a - ((D(a))**(-1))*F(a)
                print a.evalf(10)

    I would use Python 3 to avoid this issue, but my Linux distribution only ships SymPy for Python 2. Thanks for any help you can provide. Also, in case anyone was wondering, I haven't yet generalized this script for nxn Jacobians, and only had to deal with 2x2 in my problem set. Additionally, I'm slicing the 2x2 zero matrix instead of using the command zeros(2,1) because SymPy 0.7.1, installed on my machine, complains that "zeros() takes exactly one argument", though the wiki suggests otherwise. Maybe that form is only in the git version.



  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes two filenames and returns 1 or returns 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical or not (code below untested):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA;
            open my $file2, '<', $fileB;
            while (my $lineA = <$file1>) {
                next if $lineA eq <$file2>;
                return 0 and last;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line-by-line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the line counts differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.


  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting data with a symmetric algorithm (e.g. AES, DES, etc.)?

    I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a huge number of iterations to derive a secure enough key/IV for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that an attacker who gets his hands on a hash can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, or the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from the password string (which is a very, very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would result in a random bit distribution in the key (as opposed to using the string bytes directly), and the attacker would have to brute-force the whole key range (e.g. if the key length is 256 bits, he would have to try 2^256 combinations) anyway.

    So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.
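    For reference, the two derivations being contrasted look like this (a Python sketch; Rfc2898DeriveBytes implements PBKDF2, which is what pbkdf2_hmac computes):

        import hashlib
        import os

        password = b"correct horse battery staple"

        # What the question proposes: one cheap hash as the key material.
        key_fast = hashlib.sha256(password).digest()

        # What Rfc2898DeriveBytes does: salted and deliberately slow (PBKDF2).
        salt = os.urandom(16)
        key_slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100000, dklen=32)

    The practical difference is not the 2^256 keyspace but the passphrase space: an attacker guesses passwords, not keys, and the iterated, salted derivation makes each guess roughly 100,000 times more expensive than a single hash.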


  • Split a string by comma, quote, and full stop... with a few exceptions

    - by dunc
    I've got a lot of text, similar to the following paragraph, which I'd like to split into words without punctuation (', ", ,, ., newline etc.), with a few exceptions:

        Initially considered endemic to the Chalakudy River system in Kerala state,
        southern India, but now recognised to have a wider distribution in
        surrounding drainages including the Periyar, Manimala, and Pamba river
        though the Manimala data may be questionable given it seems to be the type
        locality of P. denisonii. In the Achankovil River basin it occurs
        sympatrically, and sometimes syntopically, with P. denisonii. Wild stocks
        may have dwindled by as much as 50% in the last 15 years or so with
        collection for the aquarium trade largely held responsible although
        habitats are also being degraded by pollution from agricultural and
        domestic sources, plus destructive fishing methods involving explosives or
        organic toxins.

    The text refers to P. denisonii, which is a species of fish; it's an abbreviation of Genus species. I would like this reference to be one word. So, for instance, this is the kind of array I'd like to see:

        Array
        (
            ...
            [44] given
            [45] it
            [46] seems
            [47] to
            [48] be
            [49] the
            [50] type
            [51] locality
            [52] of
            [53] P. denisonii
            [54] In
            [55] the
            ...
        )

    The only things that distinguish a species reference such as P. denisonii from a new sentence like "end. New" are:

    - The P (for Puntius, as in the P. in the aforementioned example) is only ever one letter, always a capital
    - The d (as in . denisonii) is always either a lowercase letter or an apostrophe (')

    What regexp can I use with preg_split to give me such an array? I've tried a simple explode( " ", $array ) but it doesn't do the job at all. Thanks in advance,
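    The exception can be captured by matching tokens rather than splitting on delimiters: try the species pattern first, then fall back to plain words. A sketch of that idea (in Python purely to illustrate the regex; PHP's preg_* functions accept the same pattern syntax):

        import re

        text = "locality of P. denisonii. In the Achankovil River basin"
        # First alternative: capital letter, dot, whitespace, then a
        # lowercase/apostrophe word (the species rule above).
        # Second alternative: any ordinary word.
        tokens = re.findall(r"[A-Z]\.\s+[a-z'][a-z']*|[\w']+", text)
        # ['locality', 'of', 'P. denisonii', 'In', 'the', 'Achankovil', 'River', 'basin']

    Because "end. New" has a capital after the full stop, it fails the first alternative and splits into two words, exactly as the rules above require.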


  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and images are stored on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define a filter as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object:

        npp::ImageCPU_32f_C1 hostKernel(3,3); // allocate space for 3x3 convolution kernel
        // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
        hostKernel[0][0] = -1;   // this doesn't compile
        hostKernel(0,0) = -1;    // this doesn't compile
        hostKernel.at(0,0) = -1; // this doesn't compile

    How can I manually put values into an ImageCPU object?

    Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in smoothing filter (probably something like [1 1 1; 1 1 1; 1 1 1]).


  • Fastest method to determine whether a number is a triangular number

    - by psihodelia
    A triangular number is the sum of the n natural numbers from 1 to n. What is the fastest method to find whether a given positive integer number is a triangular one? I suppose, there must be a hidden pattern in a binary representation of such numbers (like if you need to find whether a number is even/odd you check its least significant bit). Here is a cut of the first 1200th up to 1300th triangular numbers, you can easily see a bit-pattern here (if not, try to zoom out): (720600, '10101111111011011000') (721801, '10110000001110001001') (723003, '10110000100000111011') (724206, '10110000110011101110') (725410, '10110001000110100010') (726615, '10110001011001010111') (727821, '10110001101100001101') (729028, '10110001111111000100') (730236, '10110010010001111100') (731445, '10110010100100110101') (732655, '10110010110111101111') (733866, '10110011001010101010') (735078, '10110011011101100110') (736291, '10110011110000100011') (737505, '10110100000011100001') (738720, '10110100010110100000') (739936, '10110100101001100000') (741153, '10110100111100100001') (742371, '10110101001111100011') (743590, '10110101100010100110') (744810, '10110101110101101010') (746031, '10110110001000101111') (747253, '10110110011011110101') (748476, '10110110101110111100') (749700, '10110111000010000100') (750925, '10110111010101001101') (752151, '10110111101000010111') (753378, '10110111111011100010') (754606, '10111000001110101110') (755835, '10111000100001111011') (757065, '10111000110101001001') (758296, '10111001001000011000') (759528, '10111001011011101000') (760761, '10111001101110111001') (761995, '10111010000010001011') (763230, '10111010010101011110') (764466, '10111010101000110010') (765703, '10111010111100000111') (766941, '10111011001111011101') (768180, '10111011100010110100') (769420, '10111011110110001100') (770661, '10111100001001100101') (771903, '10111100011100111111') (773146, '10111100110000011010') (774390, '10111101000011110110') (775635, '10111101010111010011') (776881, '10111101101010110001') (778128, '10111101111110010000') (779376, '10111110010001110000') (780625, '10111110100101010001') (781875, '10111110111000110011') (783126, '10111111001100010110') (784378, '10111111011111111010') (785631, '10111111110011011111') (786885, '11000000000111000101') (788140, '11000000011010101100') (789396, '11000000101110010100') (790653, '11000001000001111101') (791911, '11000001010101100111') (793170, '11000001101001010010') (794430, '11000001111100111110') (795691, '11000010010000101011') (796953, '11000010100100011001') (798216, '11000010111000001000') (799480, '11000011001011111000') (800745, '11000011011111101001') (802011, '11000011110011011011') (803278, '11000100000111001110') (804546, '11000100011011000010') (805815, '11000100101110110111') (807085, '11000101000010101101') (808356, '11000101010110100100') (809628, '11000101101010011100') (810901, '11000101111110010101') (812175, '11000110010010001111') (813450, '11000110100110001010') (814726, '11000110111010000110') (816003, '11000111001110000011') (817281, '11000111100010000001') (818560, '11000111110110000000') (819840, '11001000001010000000') (821121, '11001000011110000001') (822403, '11001000110010000011') (823686, '11001001000110000110') (824970, '11001001011010001010') (826255, '11001001101110001111') (827541, '11001010000010010101') (828828, '11001010010110011100') (830116, '11001010101010100100') (831405, '11001010111110101101') (832695, '11001011010010110111') (833986, '11001011100111000010') (835278, '11001011111011001110') 
(836571, '11001100001111011011') (837865, '11001100100011101001') (839160, '11001100110111111000') (840456, '11001101001100001000') (841753, '11001101100000011001') (843051, '11001101110100101011') (844350, '11001110001000111110') For example, can you also see a rotated normal distribution curve, represented by zeros between 807085 and 831405?
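    For what it's worth, there is an exact arithmetic test that needs no bit pattern: n is triangular iff n = k(k+1)/2 for some integer k, which by the quadratic formula holds exactly when 8n + 1 is a perfect square. A minimal Python sketch:

        import math

        def is_triangular(n):
            """True iff n = k*(k+1)/2 for some non-negative integer k."""
            if n < 0:
                return False
            root = math.isqrt(8 * n + 1)   # integer square root (Python 3.8+)
            return root * root == 8 * n + 1

        # is_triangular(720600) -> True   (the 1200th triangular number)
        # is_triangular(720601) -> False

    This costs one integer square root per test, which is likely hard to beat even with bit tricks.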


  • Mobile enabled web apps with ASP.NET MVC 3 and jQuery Mobile

    - by shiju
    In my previous blog posts, I demonstrated a simple web app using ASP.NET MVC 3 and EF Code First. In this post, I will focus on making this application work for mobile devices. A single web site will be used for both mobile and desktop browsers: users accessing the web app from mobile browsers will be redirected to mobile-specific pages, and will get the normal pages when accessing from desktop browsers. In this demo app, the mobile-specific pages are maintained in an ASP.NET MVC Area named Mobile, and mobile users will be redirected to it.

    Let's add a new Area named Mobile to the ASP.NET MVC app. To add the Area, right-click the ASP.NET MVC project and select Area from the Add option. Our mobile-specific pages using jQuery Mobile will be maintained in the Mobile Area.

    ASP.NET MVC global filter for redirecting mobile visitors to the Mobile area

    Let's add an ASP.NET MVC global filter that redirects mobile visitors to the Mobile area. The filter below is taken from the sample app http://aspnetmobilesamples.codeplex.com/ created by the ASP.NET team.

        public class RedirectMobileDevicesToMobileAreaAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(System.Web.HttpContextBase httpContext)
            {
                // Only redirect on the first request in a session
                if (!httpContext.Session.IsNewSession)
                    return true;

                // Don't redirect non-mobile browsers
                if (!httpContext.Request.Browser.IsMobileDevice)
                    return true;

                // Don't redirect requests for the Mobile area
                if (Regex.IsMatch(httpContext.Request.Url.PathAndQuery, "/Mobile($|/)"))
                    return true;

                return false;
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                var redirectionRouteValues = GetRedirectionRouteValues(filterContext.RequestContext);
                filterContext.Result = new RedirectToRouteResult(redirectionRouteValues);
            }

            // Override this method if you want to customize the controller/action/parameters to which
            // mobile users would be redirected. This lets you redirect users to the mobile equivalent
            // of whatever resource they originally requested.
            protected virtual RouteValueDictionary GetRedirectionRouteValues(RequestContext requestContext)
            {
                return new RouteValueDictionary(new { area = "Mobile", controller = "Home", action = "Index" });
            }
        }

    Let's add the global filter RedirectMobileDevicesToMobileAreaAttribute to the global filter collection in the Application_Start() of the Global.asax.cs file:

        GlobalFilters.Filters.Add(new RedirectMobileDevicesToMobileAreaAttribute(), 1);

    Now your mobile visitors will be redirected to the Mobile area. However, the browser detection logic in the RedirectMobileDevicesToMobileAreaAttribute filter will not work for some modern browsers and in some conditions. The good news is that ASP.NET's browser detection feature is extensible and works well with the open source framework 51Degrees.mobi. 51Degrees.mobi is a Browser Capabilities Provider that plugs into ASP.NET's Request.Browser and provides more accurate and detailed information. For more details, visit the documentation page at http://51degrees.codeplex.com/documentation.
    Let's add a reference to the 51Degrees.mobi library using NuGet. We can easily add 51Degrees.mobi from NuGet, and this will update the web.config with the necessary configuration.

    Mobile web app using the jQuery Mobile Framework

    The jQuery Mobile Framework is built on top of jQuery and provides top-of-the-line JavaScript in a unified user interface that works across the most-used smartphone web browsers and tablet form factors. It provides an easy way to develop user interfaces for mobile web apps. The current version of the framework is jQuery Mobile Alpha 3. We need to include the following files to use jQuery Mobile:

    - The jQuery Mobile CSS file (jquery.mobile-1.0a3.min.css)
    - The jQuery library (jquery-1.5.min.js)
    - The jQuery Mobile library (jquery.mobile-1.0a3.min.js)

    Let's add the required jQuery files directly from the jQuery CDN. You can also download the files and host them on your own server.

    jQuery Mobile page structure

    The basic jQuery Mobile page structure is given below:

        <!DOCTYPE html>
        <html>
        <head>
          <title>Page Title</title>
          <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />
          <script src="http://code.jquery.com/jquery-1.5.min.js"></script>
          <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script>
        </head>
        <body>
        <div data-role="page">
          <div data-role="header">
            <h1>Page Title</h1>
          </div>
          <div data-role="content">
            <p>Page content goes here.</p>
          </div>
          <div data-role="footer">
            <h4>Page Footer</h4>
          </div>
        </div>
        </body>
        </html>

    The data- attributes are a new feature of HTML5, so jQuery Mobile works on browsers that support HTML5. You can get detailed browser support information from http://jquerymobile.com/gbs/. In the head section we have included the core jQuery JavaScript file, the jQuery Mobile library, and the core CSS library for UI element styling. These files are minified versions and will improve page load performance on mobile devices. jQuery Mobile pages are identified by an element with the data-role="page" attribute inside the <body> tag:

        <div data-role="page"> </div>

    Within the "page" container, any valid HTML markup can be used, but for typical pages in jQuery Mobile, the immediate children of a "page" are divs with data-roles of "header", "content", and "footer":

        <div data-role="page">
            <div data-role="header">...</div>
            <div data-role="content">...</div>
            <div data-role="footer">...</div>
        </div>

    The div with data-role="content" holds the main content of the page and is where the user interaction elements go; the div with data-role="header" is the header part of the page, and the div with data-role="footer" is the footer.

    Creating mobile-specific pages in the Mobile area

    Let's create a Layout page for our Mobile area:

        <!DOCTYPE html>
        <html>
        <head>
            <title>@ViewBag.Title</title>
            <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />
            <script src="http://code.jquery.com/jquery-1.5.min.js"></script>
            <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script>
        </head>
        <body>
            @RenderBody()
        </body>
        </html>

    In the Layout page, I have referenced the jQuery Mobile JavaScript files and the CSS file.
    Let's add an Index view page, Index.cshtml:

        @{
            ViewBag.Title = "Index";
        }
        <div data-role="page">
          <div data-role="header">
            <h1>Expense Tracker Mobile</h1>
          </div>
          <div data-role="content">
            <ul data-role="listview">
              <li>@Html.Partial("_LogOnPartial")</li>
              <li>@Html.ActionLink("Home", "Index", "Home")</li>
              <li>@Html.ActionLink("Category", "Index", "Category")</li>
              <li>@Html.ActionLink("Expense", "Index", "Expense")</li>
            </ul>
          </div>
          <div data-role="footer">
            Shiju Varghese | <a href="http://weblogs.asp.net/shijuvarghese">Blog</a> | <a href="http://twitter.com/shijucv">Twitter</a>
          </div>
        </div>

    In the Index page, we have used the "listview" data-role to show our content as a list view. Let's create a data entry screen, Create.cshtml:

        @model MyFinance.Domain.Category
        @{
            ViewBag.Title = "Create Category";
        }
        <div data-role="page">
          <div data-role="header">
            <h1>Create Category</h1>
            @Html.ActionLink("Home", "Index", "Home", null, new { @class = "ui-btn-right" })
          </div>
          <div data-role="content">
            @using (Html.BeginForm("Create", "Category", FormMethod.Post))
            {
              <div data-role="fieldcontain">
                @Html.LabelFor(model => model.Name)
                @Html.EditorFor(model => model.Name)
                <div>
                  @Html.ValidationMessageFor(m => m.Name)
                </div>
              </div>
              <div data-role="fieldcontain">
                @Html.LabelFor(model => model.Description)
                @Html.EditorFor(model => model.Description)
              </div>
              <div class="ui-body ui-body-b">
                <button type="submit" data-role="button" data-theme="b">Save</button>
              </div>
            }
          </div>
        </div>

    In jQuery Mobile, form elements should be placed inside a data-role="fieldcontain" element. (Screenshots of the Index and Create pages rendered in a mobile browser are not reproduced here.)

    Source Code

    You can download the source code from http://efmvc.codeplex.com

    Summary

    We have created a single web app for desktop and mobile browsers: if a user accesses the site from a desktop browser, they get the normal web pages, and they get mobile-specific pages when accessing from a mobile browser. If users access the website from a mobile device, we redirect them to the ASP.NET MVC area Mobile. For the redirection, we used a global filter for the redirection logic and the open source framework 51Degrees.mobi for better mobile browser detection. In the Mobile area, we created the pages using jQuery Mobile, so users get mobile-friendly web pages. We can create great mobile web apps using ASP.NET MVC and the jQuery Mobile Framework.


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance, and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

    - Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    - Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    - Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First

    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches

    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanation of each of the strategies applies perfectly to all approaches, be it Code First or others.

    A Note For Those Who Are New to Entity Framework and Code First

    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine for Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top-Down Development Scenario

    These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model

    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.

    Implement the Object Model with Code First

    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet, for the base class BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries

    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries, that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

        IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
        List<BillingDetail> billingDetails = linqQuery.ToList();

    Or the same query in EntitySQL:

        string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
        ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                                .CreateQuery<BillingDetail>(eSqlQuery);
        List<BillingDetail> billingDetails = objectQuery.ToList();

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries

    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

        IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    EntitySQL has an OFTYPE operator that does the same thing:

        string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

        string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                             FROM BillingDetails AS b
                             WHERE b IS OF(Model.BankAccount)";

    (Note that in the above query, Model.BankAccount is the fully qualified name of the BankAccount class. You need to replace "Model" with your own namespace name.)

    Table per Class Hierarchy (TPH)

    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism (both polymorphic and non-polymorphic queries perform well), and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions, and schema evolution is straightforward.

    Discriminator Column

    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names, in this case "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to be Nullable in the Database

    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form

    Another important issue is normalization. We've created functional dependencies between non-key columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that might also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
    Generated SQL Query

    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

        SELECT
            [Extent1].[Discriminator] AS [Discriminator],
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift],
            [Extent1].[CardType] AS [CardType],
            [Extent1].[ExpiryMonth] AS [ExpiryMonth],
            [Extent1].[ExpiryYear] AS [ExpiryYear]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

    And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

        SELECT
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] = 'BankAccount'

    Note how Code First adds a restriction on the discriminator column, and also how it selects only those columns that belong to the BankAccount entity.

    Change the Discriminator Column Data Type and Values With the Fluent API

    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BillingDetail>()
                        .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                        .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
        }

    Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept an object:

        public void HasValue(object value);

    Therefore, if we pass a value of type int, for example, Code First will not only use our desired values (i.e. 1 and 2) in the discriminator column, but will also change the column type to (INT, NOT NULL):

        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

    Summary

    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design; after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References

    - ADO.NET team blog
    - Java Persistence with Hibernate book

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces.

What is the CODE Keyboard?

The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those who didn't start using computers until the mid-to-late 1990s have most likely always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

Setting Up the CODE Keyboard

Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS/2 adapter, and a tool you may be unfamiliar with: a key puller. We'll return to the key puller in a moment.

Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it.

While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down; the rubber feet work that well.
After you've secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some dip switches inside.

The dip switches are there to toggle hardware functions for various operating systems and keyboard layouts, and to enable or disable the function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode or Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 dip switch: it disables the Caps Lock key; goodbye, accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here.

The quick-start for Windows users is simple: double-check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key, as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

Design, Layout, and Backlighting

The CODE keyboard comes in two flavors: a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We call the layout traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout or spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location.

Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11 and F12 keys, the Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- controls in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly.

Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches themselves are mounted to a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and diffuses across the white plate to provide nice, even illumination in between the keys.

Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard: it's rock solid. The 87-key model, the one we tested, weighs 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath it, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

Examining the Keys

This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard and its general construction, but what about the actual keys?

There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction: the key floats in a plastic frame over a rubber membrane that has a little rubber dome for each key. Pressing the physical key compresses the rubber dome downward, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to press that key all the way down. As such, it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue.

The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches share the classic design of the other Cherry switches (such as the MX Blue and Brown lineups) but are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn't to say that the keyboard doesn't make a nice audible sound when a key is fully depressed, but the key mechanism itself doesn't produce a loud click when triggered.

One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists, this very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any words-per-minute records, that little bump when pressing the key is satisfying.

The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You'd have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the life cycle of this keyboard.

So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting light upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
There it is: the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke.

The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards; even gaming keyboards typically only offer extra rollover on high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter, you gain this ability. If you don't use the PS/2 adapter and use the native USB connection instead, you still get 6-key rollover (and Ctrl, Alt, and Shift don't count toward the six), so realistically you still won't be able to out-type the keyboard, as even finger-twisting combos and high-speed typing will fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance you're a serious typist, and that kind of quality construction and high-count key rollover is a fantastic feature.

The Good, the Bad, and the Verdict

We've put the CODE keyboard through the paces: we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

The Good:

- The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
- The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
- The dip switch system makes it very easy for users on different systems, and with different keyboard layout needs, to switch between operating systems and keyboard layouts. If you're investing a chunk of change in a keyboard, it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing.
- The backlighting is perfect. You can adjust it from a barely visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
- You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
- The weight of the unit, combined with the extra-thick rubber feet, keeps it planted exactly where you place it on the desk.

The Bad:

- While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards.
- People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
Whether you're a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high, and the typing experience so enjoyable, that you're easily getting ten times the value you'd get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome model), there's no good reason not to pick up a CODE keyboard.

Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article
