Search Results

Search found 6580 results on 264 pages for 'require'.

Page 43 of 264

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache config below, sanitized of course) ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin" <Directory "/usr/local/nagios/sbin"> #SSLRequireSSL Options ExecCGI AllowOverride none AuthType Basic AuthName "LDAP Authentication" AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthzLDAPAuthoritative off AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Alias /nagios "/usr/local/nagios/share" <Directory /usr/local/nagios/share> #SSLRequireSSL Options None AllowOverride none AuthBasicProvider ldap AuthType Basic AuthName "LDAP Authentication" AuthzLDAPAuthoritative off AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message: [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php I'm almost certain its a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
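
    A likely fix, assuming Apache 2.2 with mod_authnz_ldap: the cgi-bin <Directory> block above never names an authentication provider (the share block does), so requests under /nagios/cgi-bin fall back to a provider that is not configured, which matches the "verification of user id ... not configured" error. A minimal sketch of the corrected block, reusing only directives already present in the question:

        <Directory "/usr/local/nagios/sbin">
            Options ExecCGI
            AllowOverride None
            AuthBasicProvider ldap    # the line missing from the failing block
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            Require valid-user
        </Directory>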

    Read the article

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g., Section No. Pages 01010 Summary of Work..............5 01025 Prices.......................2 01400 Quality Control..............1 01700 Contract Close Out...........2 A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as it was usable by our low-tech users. Related question - all the section files are in the same folder. It would be nice if the TOC loaded every file in a folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, the approach required too much maintenance for our Wordophobes.
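
    Word has no built-in TOC field that pulls in a section's page count, so a short macro is the likely route. The sketch below is hypothetical and untested: it assumes the section files live in one folder, opens each one read-only, reads its page count with ComputeStatistics(wdStatisticPages), and types a line into the document that holds the TOC.

        Sub BuildSectionTOC()
            Dim folder As String, f As String, doc As Document, pages As Long
            folder = "C:\Specs\Sections\"    ' hypothetical path to the section files
            f = Dir(folder & "*.doc*")
            Do While f <> ""
                Set doc = Documents.Open(folder & f, ReadOnly:=True, Visible:=False)
                pages = doc.ComputeStatistics(wdStatisticPages)
                doc.Close SaveChanges:=False
                Selection.TypeText Text:=f & vbTab & pages & vbCr   ' "filename <tab> page count"
                f = Dir
            Loop
        End Sub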

    Read the article

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in my documentroot of my Apache2 installation, however, whenever I try and access it from a web-browser it gives me a 403 (forbidden) error. My virtualhosts file - <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName REMOVED DocumentRoot /var/www/ <Directory /> Options FollowSymLinks AllowOverride All AuthType Digest AuthName "documentroot" AuthDigestProvider file AuthUserFile /etc/apache2/htpasswd Require user REMOVED AllowOverride Indexes </Directory> <Directory /var/www/> Options FollowSymLinks Options -Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> <Directory /var/www/share/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/REMOVED/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/stream/> Order Deny,Allow Allow from all Satisfy any </Directory> <Directory /var/www/test> Order Deny,Allow Allow from all Satisfy any </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined <Directory /var/www/REMOVED> AuthType Digest AuthName "rutorrent" AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED AuthDigestProvider file AuthUserFile /etc/apache2/htpasswd Require valid-user SetEnv R_ENV "/var/www/REMOVED" AllowOverride Indexes </Directory> </VirtualHost> Image of the permissions - Other information - If I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side. If I copy folder 'rapidleech' to the name 'rapidleech1', it will let access 'rapidleech1', but no longer 'rapidleech', until I delete the copy. In my logs I found nothing logged in errors.log, and only that it delivered a 403 in access.log. All the appropriate users are members of www-data.
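
    Since Options -Indexes is in force, a brand-new folder with no index file will return 403 even when its permissions are fine, so that is worth ruling out before chasing ACLs. A few hedged checks, using the paths and the www-data group already mentioned in the question:

        # match ownership/permissions of a folder that works
        sudo chown -R www-data:www-data /var/www/test
        sudo chmod 755 /var/www/test
        # directory listings are disabled, so give Apache something to serve...
        echo "ok" | sudo tee /var/www/test/index.html
        # ...and watch the error log while requesting the folder to see the real reason
        sudo tail -f /var/log/apache2/error.log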

    Read the article

  • How to configure apache to basic authentication or allow when ntlm while proxying?

    - by trotzim
    Here is my setup: browser --- apache proxy --- ISA server --- internet The ISA server requires authentication. The issue is to allow HTTPS through the two proxies. A configuration that works with HTTP is something like this: (yes, I don't want to use ProxyPass but ProxyRequests) <virtualhost *:8080> ... SetEnv auth-proxy-chain on ... ProxyRequests On ProxyRemote * http://isaproxy:80 ... <proxy *> AuthName "ISA server auth" AuthType Basic [here a module to authenticate] require valid-user Allow from all </proxy> ... </virtualhost> The user can authenticate on the Apache proxy, and the authentication chain is then sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI module (NTLM) with something like this: blablabla <proxy *> AuthName "ISA server auth" AuthType ntlm [ SSPI stuff ] Require valid-user Allow from all </proxy> the Apache server rejects the authentication (or the ISA server does; I don't really know). I used Wireshark to look at the nominal process while using the ISA server directly as the proxy: the first auth chain is Basic, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it passes the NTLM authentication through to the ISA proxy without checking it(*)? Or so that it rewrites headers to force Basic authentication? (*) It seems not to be as easy as it sounds...
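
    One hedged approach is to stop authenticating in Apache's <proxy> block entirely and let the ISA server's own 407 challenge travel through to the browser; NTLM authenticates the TCP connection, so this only helps if keep-alive survives both hops. A rough sketch (untested against ISA):

        <VirtualHost *:8080>
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            <Proxy *>
                # no AuthType/Require here, so ISA's Basic/NTLM challenge reaches the client untouched
                Order deny,allow
                Allow from 192.168.0.0/16    # restrict who may use the proxy by address instead
            </Proxy>
            # only relevant if Apache keeps doing its own proxy auth: the documented mod_proxy
            # variable for forwarding the client's Proxy-Authorization upstream is proxy-chain-auth
            SetEnv proxy-chain-auth on
        </VirtualHost>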

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin, and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple, I will host multiple sites on the server; some Apache with PHP & Mysql and some Passenger with Rails & Postgres. All will require email smtp/pop. FTP/Stats will not be required. Could someone please give me a quick run-down of what I would need to do - in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far: Install/update latest versions of MySQL & Postgres (are they 'safe' out of the box? Or do I need to do anything else like set up root passwords etc?) Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?) Set up nameservers/hostnames/reverse DNS etc (Any guides on how to do this please?) Install Rubygems Install and configure Dovecot and Postfix (any tips on doing this? Or links to how-tos that cover it please?) Set up each website - any links to guides on how to do this? Install/configure firewall (or is the default install good to go?) Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
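
    As a very rough starting point on CentOS 6 (package names only; every setup will differ), none of the base installs are hardened: MySQL ships with an empty root password, and the stock httpd.conf/php.ini deserve a review. A hedged sketch of the package and service steps:

        yum update -y
        yum install -y httpd php php-mysql mysql-server postgresql-server postfix dovecot
        service mysqld start && mysql_secure_installation      # set root password, drop test db
        service postgresql initdb && service postgresql start
        chkconfig httpd on; chkconfig mysqld on; chkconfig postgresql on; chkconfig postfix on; chkconfig dovecot on
        # Rails side: install Ruby some other way (e.g. RVM), then Passenger for Apache
        gem install passenger && passenger-install-apache2-module
        # the default iptables rules only allow SSH; open 25/80/110/143/443 etc. as needed
        system-config-firewall-tui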

    Read the article

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI) I have a server running Apache 2.4 on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com which has no SSL set up, Apache will automatically serve the first VHost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour, or in other words, how do we tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like: <VirtualHost foobar.com:80> DocumentRoot /somepath/foobar.com <Directory /somepath/foobar.com> Options -Indexes Require all granted DirectoryIndex index.php AllowOverride All </Directory> ServerName foobar.com ServerAlias www.foobar.com </VirtualHost> <VirtualHost test.example.com:443> DocumentRoot /somepath/ <Directory /somepath/> Options -Indexes Require all granted AllowOverride All </Directory> ServerName test.example.com SSLEngine on SSLCertificateFile [...] SSLCertificateKeyFile [...] SSLCertificateChainFile [...] </VirtualHost> With this, if I try to access https://foobar.com Chrome will show me an SSL error mentioning that the server was identifying itself as test.example.org. Thanks in advance!
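
    Apache cannot refuse the TLS handshake for a name it holds no certificate for; the request always lands on some *:443 vhost. What does work, assuming SNI-capable clients, is to declare a catch-all default SSL vhost that loads before the real ones and refuses to serve anything, so stray HTTPS requests no longer fall into test.example.com. A hedged sketch (certificate paths are placeholders for any self-signed pair):

        # must be the first *:443 virtual host Apache loads
        <VirtualHost _default_:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            <Location />
                Require all denied
            </Location>
        </VirtualHost>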

    Read the article

  • Best practice ACLs to prepare for auditors?

    - by Nic
    An auditor will be visiting our office soon, and they will require read-only access to our data. I have already created a domain user account and placed them into a group called "Auditors". We have a single fileserver (Windows Server 2008) with about ten shared folders. All of the shares are set up to allow full access to authenticated users, and access restrictions are implemented with NTFS ACL's. Most folders allow full access to the "Domain Users" group, but the auditor won't need to make any changes. It takes several hours to update NTFS ACL's since we have about one million files. Here are the options that I am currently considering. Create a "staff" group to assign read/write instead of "Domain Users" at the share level Create a "staff" group to assign read/write instead of "Domain Users" at the NTFS level Deny access to the "Auditors" group at the share level Deny access to the "Auditors" group at the NTFS level Accept the status quo and trust the auditor. I will probably need to configure similar users in the future, as some of our contractors require a domain account but shouldn't be able to modify our client data. Is there a best practice for this?
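
    If the read-only grant does end up being applied at the NTFS level, it can at least be scripted rather than clicked through. A hedged example with icacls on Server 2008 (the share path and domain name are hypothetical), adding inherited read-and-execute for the Auditors group without disturbing existing entries:

        icacls "D:\Shares\ClientData" /grant "MYDOMAIN\Auditors:(OI)(CI)RX" /T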

    Read the article

  • Bundler isn't loading gems

    - by Garrett
    I have been having a problem with using Bundler and being able to access my gems without having to require them somewhere, as config.gem used to do that for me (as far as I know). In my Rails 3 app, I defined my Gemfile like so: clear_sources source "http://gemcutter.org" source "http://gems.github.com" bundle_path "vendor/bundler_gems" ## Bundle edge rails: git "git://github.com/rails/arel.git" git "git://github.com/rails/rack.git" gem "rails", :git => "git://github.com/rails/rails.git" ## Bundle gem "mongo_mapper", :git => "git://github.com/jnunemaker/mongomapper.git" gem "bluecloth", ">= 2.0.0" Then I run gem bundle, it bundles it all up like expected. Inside the environment.rb file that is included within boot.rb it looks like this: # DO NOT MODIFY THIS FILE module Bundler file = File.expand_path(__FILE__) dir = File.dirname(file) ENV["PATH"] = "#{dir}/../../../../bin:#{ENV["PATH"]}" ENV["RUBYOPT"] = "-r#{file} #{ENV["RUBYOPT"]}" $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/ext") 
$LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/lib") @gemfile = "#{dir}/../../../../Gemfile" require "rubygems" unless respond_to?(:gem) # 1.9 already has RubyGems loaded @bundled_specs = {} @bundled_specs["builder"] = eval(File.read("#{dir}/specifications/builder-2.1.2.gemspec")) @bundled_specs["builder"].loaded_from = "#{dir}/specifications/builder-2.1.2.gemspec" @bundled_specs["text-hyphen"] = eval(File.read("#{dir}/specifications/text-hyphen-1.0.0.gemspec")) @bundled_specs["text-hyphen"].loaded_from = "#{dir}/specifications/text-hyphen-1.0.0.gemspec" @bundled_specs["i18n"] = eval(File.read("#{dir}/specifications/i18n-0.3.3.gemspec")) @bundled_specs["i18n"].loaded_from = "#{dir}/specifications/i18n-0.3.3.gemspec" @bundled_specs["arel"] = eval(File.read("#{dir}/specifications/arel-0.2.pre.gemspec")) @bundled_specs["arel"].loaded_from = "#{dir}/specifications/arel-0.2.pre.gemspec" @bundled_specs["activemodel"] = eval(File.read("#{dir}/specifications/activemodel-3.0.pre.gemspec")) @bundled_specs["activemodel"].loaded_from = "#{dir}/specifications/activemodel-3.0.pre.gemspec" @bundled_specs["jnunemaker-validatable"] = eval(File.read("#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec")) @bundled_specs["jnunemaker-validatable"].loaded_from = "#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec" @bundled_specs["abstract"] = eval(File.read("#{dir}/specifications/abstract-1.0.0.gemspec")) @bundled_specs["abstract"].loaded_from = "#{dir}/specifications/abstract-1.0.0.gemspec" @bundled_specs["erubis"] = eval(File.read("#{dir}/specifications/erubis-2.6.5.gemspec")) @bundled_specs["erubis"].loaded_from = "#{dir}/specifications/erubis-2.6.5.gemspec" @bundled_specs["mime-types"] = eval(File.read("#{dir}/specifications/mime-types-1.16.gemspec")) @bundled_specs["mime-types"].loaded_from = "#{dir}/specifications/mime-types-1.16.gemspec" @bundled_specs["mail"] = eval(File.read("#{dir}/specifications/mail-2.1.2.gemspec")) @bundled_specs["mail"].loaded_from = "#{dir}/specifications/mail-2.1.2.gemspec" @bundled_specs["rake"] = eval(File.read("#{dir}/specifications/rake-0.8.7.gemspec")) @bundled_specs["rake"].loaded_from = "#{dir}/specifications/rake-0.8.7.gemspec" @bundled_specs["railties"] = eval(File.read("#{dir}/specifications/railties-3.0.pre.gemspec")) @bundled_specs["railties"].loaded_from 
= "#{dir}/specifications/railties-3.0.pre.gemspec" @bundled_specs["memcache-client"] = eval(File.read("#{dir}/specifications/memcache-client-1.7.7.gemspec")) @bundled_specs["memcache-client"].loaded_from = "#{dir}/specifications/memcache-client-1.7.7.gemspec" @bundled_specs["rack"] = eval(File.read("#{dir}/specifications/rack-1.1.0.gemspec")) @bundled_specs["rack"].loaded_from = "#{dir}/specifications/rack-1.1.0.gemspec" @bundled_specs["rack-test"] = eval(File.read("#{dir}/specifications/rack-test-0.5.3.gemspec")) @bundled_specs["rack-test"].loaded_from = "#{dir}/specifications/rack-test-0.5.3.gemspec" @bundled_specs["rack-mount"] = eval(File.read("#{dir}/specifications/rack-mount-0.4.5.gemspec")) @bundled_specs["rack-mount"].loaded_from = "#{dir}/specifications/rack-mount-0.4.5.gemspec" @bundled_specs["actionpack"] = eval(File.read("#{dir}/specifications/actionpack-3.0.pre.gemspec")) @bundled_specs["actionpack"].loaded_from = "#{dir}/specifications/actionpack-3.0.pre.gemspec" @bundled_specs["bluecloth"] = eval(File.read("#{dir}/specifications/bluecloth-2.0.7.gemspec")) @bundled_specs["bluecloth"].loaded_from = "#{dir}/specifications/bluecloth-2.0.7.gemspec" @bundled_specs["activerecord"] = eval(File.read("#{dir}/specifications/activerecord-3.0.pre.gemspec")) @bundled_specs["activerecord"].loaded_from = "#{dir}/specifications/activerecord-3.0.pre.gemspec" @bundled_specs["text-format"] = eval(File.read("#{dir}/specifications/text-format-1.0.0.gemspec")) @bundled_specs["text-format"].loaded_from = "#{dir}/specifications/text-format-1.0.0.gemspec" @bundled_specs["actionmailer"] = eval(File.read("#{dir}/specifications/actionmailer-3.0.pre.gemspec")) @bundled_specs["actionmailer"].loaded_from = "#{dir}/specifications/actionmailer-3.0.pre.gemspec" @bundled_specs["tzinfo"] = eval(File.read("#{dir}/specifications/tzinfo-0.3.16.gemspec")) @bundled_specs["tzinfo"].loaded_from = "#{dir}/specifications/tzinfo-0.3.16.gemspec" @bundled_specs["activesupport"] = eval(File.read("#{dir}/specifications/activesupport-3.0.pre.gemspec")) @bundled_specs["activesupport"].loaded_from = "#{dir}/specifications/activesupport-3.0.pre.gemspec" @bundled_specs["activeresource"] = eval(File.read("#{dir}/specifications/activeresource-3.0.pre.gemspec")) @bundled_specs["activeresource"].loaded_from = "#{dir}/specifications/activeresource-3.0.pre.gemspec" @bundled_specs["rails"] = eval(File.read("#{dir}/specifications/rails-3.0.pre.gemspec")) @bundled_specs["rails"].loaded_from = "#{dir}/specifications/rails-3.0.pre.gemspec" @bundled_specs["mongo"] = eval(File.read("#{dir}/specifications/mongo-0.18.2.gemspec")) @bundled_specs["mongo"].loaded_from = "#{dir}/specifications/mongo-0.18.2.gemspec" @bundled_specs["mongo_mapper"] = eval(File.read("#{dir}/specifications/mongo_mapper-0.6.10.gemspec")) @bundled_specs["mongo_mapper"].loaded_from = "#{dir}/specifications/mongo_mapper-0.6.10.gemspec" def self.add_specs_to_loaded_specs Gem.loaded_specs.merge! @bundled_specs end def self.add_specs_to_index @bundled_specs.each do |name, spec| Gem.source_index.add_spec spec end end add_specs_to_loaded_specs add_specs_to_index def self.require_env(env = nil) context = Class.new do def initialize(env) @env = env && env.to_s ; end def method_missing(*) ; yield if block_given? ; end def only(*env) old, @only = @only, _combine_only(env.flatten) yield @only = old end def except(*env) old, @except = @except, _combine_except(env.flatten) yield @except = old end def gem(name, *args) opt = args.last.is_a?(Hash) ? 
args.pop : {} only = _combine_only(opt[:only] || opt["only"]) except = _combine_except(opt[:except] || opt["except"]) files = opt[:require_as] || opt["require_as"] || name files = [files] unless files.respond_to?(:each) return unless !only || only.any? {|e| e == @env } return if except && except.any? {|e| e == @env } if files = opt[:require_as] || opt["require_as"] files = Array(files) files.each { |f| require f } else begin require name rescue LoadError # Do nothing end end yield if block_given? true end private def _combine_only(only) return @only unless only only = [only].flatten.compact.uniq.map { |o| o.to_s } only &= @only if @only only end def _combine_except(except) return @except unless except except = [except].flatten.compact.uniq.map { |o| o.to_s } except |= @except if @except except end end context.new(env && env.to_s).instance_eval(File.read(@gemfile), @gemfile, 1) end end module Gem @loaded_stacks = Hash.new { |h,k| h[k] = [] } def source_index.refresh! super Bundler.add_specs_to_index end end But when I try to access any of my gems, e.g. MongoMapper, Paperclip, Haml, etc. I get: NameError: uninitialized constant MongoMapper The same goes for any other gem. Does Bundler not include gems like the old Rails 2.0 did? Or is something messed up with my system? Any help would be appreciated, thank you!
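
    Nothing in that generated file actually requires the bundled gems; it only adjusts load paths and defines Bundler.require_env (visible near the end of the listing). Since boot.rb already loads the file, a hedged guess at the missing piece is an explicit call early in the app's initialization, for example:

        # hypothetical placement: near the top of config/application.rb (or environment.rb),
        # after boot.rb has pulled in the generated bundler environment file
        Bundler.require_env(ENV["RAILS_ENV"] || "development")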

    Read the article

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective I am currently designing the API for a multiplatform storage system that would offer same interface and capabilities accross following supported mobile Java Platforms: J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage). Android. No minimum platform version decided yet, but rather likely could be API level 7. Blackberry. It would use the same base source of J2ME but taking advantage of some advaced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of 64 KB limitation for RMS on 4.5). Basically the API would sport three kind of stores: Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.). Preferences. It is a special store that handles properties accessed through keys (Similar to plain old java properties file but supporting some improvements such as different value data types such as SharedPreferences on Android platform) Local Message Queues. This store would offer basic message queue functionality. Considerations Inspired on JSR-75, all types of stores would be accessed in an uniform way by means of an URL following RFC 1738 conventions, but with custom defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external extorage memory cards (by means of a unit name, the same way as in JSR-75, but that would not change regardless of underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object types. Implementation issues For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms: Files. These are available on all platforms (J2ME only with JSR-75, but it is mandatory for our needs). The abstract File to actual File mapping is straight except for addressing issues. RMS. This type of store available on J2ME (and Blackberry) platforms is convenient for Preferences and maybe Message Queues (though depending on performance or size requirements these could be implemented by means of normal files). SharedPreferences. This type of storage, only available on Android, would match Preferences needs. SQLite databases. This could be used for message queues on Android (and maybe Blackberry). When it comes to encryption some requirements should be met: To ease the implementation it will be carried out on read/write operations basis on streams (for files), RMS Records, SharedPreferences key-value pairs, SQLite database columns. Every underlying storage object should use the same encryption key. Handling of encrypted stores should be the same as the unencrypted counterpart. The only difference (from the user point of view) accessing an encrypted store would be the addressing. The user PIN provides access to any secure storage object, but the change of it would not require to decrypt/re-encrypt all the encrypted data. 
Cryptographic capabilities of underlying platform should be used whenever it is possible, so we would use: J2ME: SATSA-CRYPTO if it is available (not mandatory) or lightweight BoncyCastle cryptographic framework for J2ME. Blackberry: RIM Cryptographic API or BouncyCastle Android: JCE with integraced cryptographic provider (BouncyCastle?) Doubts Having reached this point I was struck by some doubts about what solution would be more convenient, taking into account the limitation of the plataforms. These are some of my doubts: Encryption Algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such scenario would you suggest? Encryption Mode. I have read about the weakness of ECB encryption versus CBC, but in this case the first would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case? Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.) or just use one for all the objects of the same type. The first seems "safer", though it would require some extra space on device. In your opinion what would the trade-offs of each? Key storage. For this case using a standard JKS (or PKCS#12) KeyStore file could be suited to store encryption keys, but I could also define a smaller structure (encryption-transformation / key data / checksum) that could be attached to each storage store (i.e. using addition files with the same name and special extension for plain files or embedded inside other types of objects such as RMS Record Stores). What approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a record-store per storage object or just a global KeyStore keeping all keys (i.e. using the URL identifier of abstract storage object as alias)? Master key. The use of a master key seems obvious. This key should be protected by user PIN (introduced only once) and would allow access to the rest of encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require to reencrypt this key and not all the encrypted data. Where would you keep it taking into account that if this got lost all data would be no further accesible? What further considerations should I take into account? Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or other advantage I have not foreseen) and would this approach be prefered (whenever possible) over just BouncyCastle implementation? For the same reason is RIM Cryptographic API worth the license cost over BouncyCastle? Any comments, critics, further considerations or different approaches are welcome.
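
    A small illustration of the CBC point above for the Android/JCE case: a fresh random IV per storage object, stored in front of the ciphertext. The J2ME/CLDC build would need the equivalent calls from the BouncyCastle lightweight API instead; class and method names below are standard JCE, everything else is illustrative.

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.SecureRandom;

        public final class AesCbcSketch {
            public static byte[] encrypt(byte[] key128, byte[] plain) throws Exception {
                byte[] iv = new byte[16];
                new SecureRandom().nextBytes(iv);               // fresh IV per storage object
                Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
                c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key128, "AES"), new IvParameterSpec(iv));
                byte[] enc = c.doFinal(plain);
                byte[] out = new byte[iv.length + enc.length];  // prepend IV to ciphertext
                System.arraycopy(iv, 0, out, 0, iv.length);
                System.arraycopy(enc, 0, out, iv.length, enc.length);
                return out;
            }
        }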

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described on how to host a Node.js application on Windows Azure, one of questions might be raised about how to consume the vary Windows Azure services, such as the storage, service bus, access control, etc.. Interact with windows azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe on how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s firstly have a look on how to consume WAS through Node.js. As we know in the previous post we can host Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate for hosting in WACS worker role. The Node.js code can be used when consuming WAS when hosted on WAWS. But since there’s no roles in WAWS, the code for consuming service runtime mentioned in the next section cannot be used for WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the “node.exe” and “index.js” and install “express” and “node-sqlserver” modules, make all files as “Copy always”. In order to use windows azure services we need to have Windows Azure Node.js SDK, as knows as a module named “azure” which can be installed through NPM. Once we downloaded and installed, we need to include them in our worker role project and make them as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the currently worker role project file. You can also find the source code of this tool here. The source code of Windows Azure SDK for Node.js can be found in its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming vary windows azure services includes tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate on how to use tables and service runtime information in this post. You can find the full document of this SDK here. Back to Visual Studio and open the “index.js”, let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should looks like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to azure then it works with WASD and storage service through the configurations for cloud.   Summary In this post I demonstrated how to use Windows Azure SDK for Node.js to interact with storage service, especially the table service. I also demonstrated on how to use WACS service runtime, how to retrieve the configuration settings and the endpoint information. And in order to make the service runtime available to my Node.js application I need to create an entry point element in CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate on how to run a Node.js application on Windows platform, how to use Windows Azure Web Site and Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by Windows Azure platform through Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it’s very simple and easy to learn and deploy, as well as, it utilizes single thread non-blocking IO model, Node.js became more and more popular on web application and web service development especially for those IO sensitive projects. And as Node.js is very good at scaling-out, it’s more useful on cloud computing platform. Use Node.js on Windows platform is new, too. The modules for SQL database and Windows Azure SDK are still under development and enhancement. It doesn’t support SQL parameter in “node-sqlserver”. It does support using storage connection string to create the storage client in “azure”. But Microsoft is working on make them easier to use, working on add more features and functionalities.   PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Ruby gem installation error after OSX Yosemite and Xcode 6 installation

    - by Andres Trevino
    I tried installing a gem like I did before installing Yosemite, but now I'm getting an error: /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `synchronize': ERROR: Failed to build gem native extension. (Gem::Ext::BuildError) ERROR: Failed to build gem native extension. deadlock; recursive locking This is the command I wrote: sudo gem install mysql2 This is the message it appears in the terminal: Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/autotest-fsevent-0.2.9 for inspection. Results logged to /Library/Ruby/Gems/2.0.0/extensions/universal-darwin-14/2.0.0/autotest-fsevent-0.2.9/gem_make.out Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/autotest-fsevent-0.2.9 for inspection. Results logged to /Library/Ruby/Gems/2.0.0/extensions/universal-darwin-14/2.0.0/autotest-fsevent-0.2.9/gem_make.out from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in build_extension' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:inblock in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in each' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:inbuild_extensions' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:in block in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:inuse_ui' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:inbuild_extensions' from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:in contains_requirable_file?' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:inblock in find_inactive_by_path' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in each' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:infind' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in find_inactive_by_path' from /Library/Ruby/Site/2.0.0/rubygems.rb:185:intry_activate' from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:in rescue in require' from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:inrequire' from /Library/Ruby/Site/2.0.0/rubygems.rb:601:in load_yaml' from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:inload_file' from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:in initialize' from /Library/Ruby/Site/2.0.0/rubygems.rb:289:innew' from /Library/Ruby/Site/2.0.0/rubygems.rb:289:in configuration' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:63:inrun' from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:38:in block in build' from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tempfile.rb:324:inopen' from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:17:in build' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:161:inblock (2 levels) in build_extension' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:in chdir' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:inblock in build_extension' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in synchronize' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:inbuild_extension' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:in block in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:ineach' from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:inblock in build_extensions' from 
/Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:in use_ui' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:inbuild_extensions' from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:in build_extensions' from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:incontains_requirable_file?' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:in block in find_inactive_by_path' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:ineach' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in find' from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:infind_inactive_by_path' from /Library/Ruby/Site/2.0.0/rubygems.rb:185:in try_activate' from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:inrescue in require' from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:in require' from /Library/Ruby/Site/2.0.0/rubygems.rb:601:inload_yaml' from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:in load_file' from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:ininitialize' from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:in new' from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:indo_configuration' from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:39:in run' from /usr/bin/gem:21:in' I am using OSX 10.10 and Xcode 6 Beta. Do any of you guys have any idea as to what to do about this? Thanks in advance
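
    Two things commonly clear up native-extension failures right after a Yosemite/Xcode 6 upgrade, so they are worth trying before digging into the deadlock itself (hedged, not a guaranteed fix):

        xcode-select --install           # (re)install the command line tools for the new Xcode
        sudo xcodebuild -license accept  # re-accept the license so the compilers run non-interactively
        sudo gem update --system         # the backtrace runs through rubygems itself
        sudo gem install mysql2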

    Read the article

  • Missing a constant on load.. how can i get around this? (Rails::Plugin::OpenID)

    - by Chris Kimpton
    I have a Rails 2 project that I am trying to upgrade to Rails 3, but getting some issues with bundler. When I run "rake", it runs the tests just fine. But when I run "bundle exec rake" it fails to find a constant. The error is this: /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/dependencies.rb:131:in `const_missing': uninitialized constant Rails::Plugin::OpenID (NameError) from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/vendor/plugins/open_id_authentication/init.rb:16:in `evaluate_init_rb' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `call' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `evaluate_method' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:166:in `call' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `each' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:276:in `run_callbacks' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `run_prepare_callbacks' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:631:in `prepare_dispatcher' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:185:in `process' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `send' from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `run' from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/config/environment.rb:9 from ./test/test_helper.rb:2:in `require' from ./test/test_helper.rb:2 I have these gems installed: $ gem list *** LOCAL GEMS *** actionmailer (2.3.9) actionpack (2.3.9) activerecord (2.3.9) activeresource (2.3.9) activesupport (2.3.9) authlogic (2.1.3) bundler (1.0.7) gravtastic (2.2.0) linecache (0.43) mocha (0.9.10) newrelic_rpm (2.13.4) parseexcel (0.5.2) rack (1.1.0) rack-openid (1.1.1) rails (2.3.9) rake (0.8.7) ruby-debug-base (0.10.5.jb2, 0.10.4) ruby-debug-ide (0.4.15) ruby-openid (2.1.8, 2.1.7, 2.0.4) sqlite3-ruby (1.3.2) The bundler Gemfile is as follows: source 'http://rubygems.org' #gem 'rails', '3.0.3' gem "rails", "2.3.9" gem "activesupport", "2.3.9" gem "ruby-openid", "2.1.7", :require => "openid" #gem "authlogic-oid", "1.0.4" # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3-ruby', :require => 'sqlite3' gem 
"authlogic", "= 2.1.3" gem "newrelic_rpm" # gem "facebooker" gem "parseexcel" gem 'gravtastic', '= 2.2.0' gem "rack-openid", '=1.1.1', :require => 'rack/openid' # not sure what this does... gem "mocha" I have these plugins installed: 2dc_jqgrid authlogic_openid open_id_authentication squirrel I see these similar questions: Missing a constant on load.. how can i get around this? and Requiring gem in Rails 3 Controller failing with "Constant Missing" But their solutions dont seem to work for my situation. I am guessing the issue is around the plugins, but my ruby-foo is too weak. Thanks in advance, Chris

    Read the article

  • Applying a Dojo Toolkit (Dijit) theme to ASP.NET pages.

    - by mcoolbeth
    In the code below, I am trying to apply a Dijit theme to the controls in my .aspx page. However, the controls persist in their normal, unthemed appearance. Anybody know why? Master Page: <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Main.master.cs" Inherits="WebJournalEntryClient.Main" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>My Web Application</title> <link rel="stylesheet" href="dojoroot/dijit/themes/tundra/tundra.css" /> <script type="text/javascript" src="dojoroot/dojo/dojo.js"/> <script type="text/javascript"> dojo.require("dijit.form.Button"); dojo.require("dijit.form.TextBox"); dojo.require("dijit.form.ComboBox"); </script> </head> <body class = "tundra"> <form id="form1" runat="server"> <div> <div> This is potentially space for a header bar. </div> <table> <tr> <td> Maybe <br /> a <br /> Side <br /> bar. </td> <td> <asp:ContentPlaceHolder ID="CenterPlaceHolder" runat="server"/> </td> </tr> </table> <div> This is potentially space for a footer bar. </div> </div> </form> </body> </html> Content Page: <%@ Page Title="" Language="C#" MasterPageFile="~/Main.Master" AutoEventWireup="true" CodeBehind="LogIn.aspx.cs" Inherits="WebJournalEntryClient.LogIn" %> <asp:Content ID="Content" ContentPlaceHolderID="CenterPlaceHolder" runat="server"> <div> User ID: <asp:TextBox ID = "UserName" dojoType="dijit.form.TextBox" runat="server" /><br /> Password: <asp:TextBox ID = "PassWord" dojoType="dijit.form.TextBox" runat="server" /><br /> <asp:Button ID="LogInButton" Text="Log In" dojoType="dijit.form.Button" runat="server" /> </div> </asp:Content>
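    A likely cause, offered here as an assumption rather than something stated in the question: Dijit widgets declared in markup with dojoType attributes are only created (and themed) if the Dojo parser actually runs, and the master page above never loads dojo.parser or enables parseOnLoad, so the ASP.NET controls keep their default rendering. A minimal sketch of the relevant script tags, assuming Dojo 1.x and the same dojoroot path used in the master page:

        <script type="text/javascript" src="dojoroot/dojo/dojo.js"
                djConfig="parseOnLoad: true"></script>
        <script type="text/javascript">
            // The parser walks the DOM and turns dojoType="dijit.form.TextBox"
            // (and similar) markup into real, themed Dijit widgets; without it
            // only the body class changes and the controls stay unthemed.
            dojo.require("dojo.parser");
            dojo.require("dijit.form.Button");
            dojo.require("dijit.form.TextBox");
            dojo.require("dijit.form.ComboBox");
        </script>

    Note that the tundra class on <body> only themes widgets the parser has created; an alternative to parseOnLoad is calling dojo.parser.parse() from dojo.addOnLoad once the markup is in place.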

    Read the article

  • Rails routing to XML/JSON without views gone mad

    - by John Schulze
    I have a mystifying problem. In a very simple Ruby app i have three classes: Drivers, Jobs and Vehicles. All three classes only consist of Id and Name. All three classes have the same #index and #show methods and only render in JSON or XML (this is in fact true for all their CRUD methods, they are identical in everything but name). There are no views. For example: def index @drivers= Driver.all respond_to do |format| format.js { render :json => @drivers} format.xml { render :xml => @drivers} end end def show @driver = Driver.find(params[:id]) respond_to do |format| format.js { render :json => @driver} format.xml { render :xml => @driver} end end The models are similarly minimalistic and only contain: class Driver< ActiveRecord::Base validates_presence_of :name end In routes.rb I have: map.resources :drivers map.resources :jobs map.resources :vehicles map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' I can perform POST/create, GET/index and PUT/update on all three classes and GET/read used to work as well until I installed the "has many polymorphs" ActiveRecord plugin and added to environment.rb: require File.join(File.dirname(__FILE__), 'boot') require 'has_many_polymorphs' require 'active_support' Now for two of the three classes I cannot do a read any more. If i go to localhost:3000/drivers they all list nicely in XML but if i go to localhost:3000/drivers/3 I get an error: Processing DriversController#show (for 127.0.0.1 at 2009-06-11 20:34:03) [GET] Parameters: {"id"=>"3"} [4;36;1mDriver Load (0.0ms)[0m [0;1mSELECT * FROM "drivers" WHERE ("drivers"."id" = 3) [0m ActionView::MissingTemplate (Missing template drivers/show.erb in view path app/views): app/controllers/drivers_controller.rb:14:in `show' ...etc This is followed a by another unexpected error: Processing ApplicationController#show (for 127.0.0.1 at 2009-06-11 21:35:52)[GET] Parameters: {"id"=>"3"} NameError (uninitialized constant ApplicationController::AreaAccessDenied): ...etc What is going on here? Why does the same code work for one class but not the other two? Why is it trying to do a #view on the ApplicationController? I found that if I create a simple HTML view for each of the three classes these work fine. To each class I add: format.html # show.html.erb With this in place, going to localhost:3000/drivers/3 renders out the item in HTML and I get no errors in the log. But if attach .xml to the URL it again fails for two of the classes (with the same error message as before) while one will output XML as expected. Even stranger, on the two failing classes, when adding .js to the URL (to trigger JSON rendering) I get the HTML output instead! Is it possible this has something to do with the "has many polymorphs" plugin? I have heard of people having routing issues after installing it. Removing "has many polymorphs" and "active support" from environment.rb (and rebooting the sever) seems to make no difference whatsoever. Yet my problems started after it was installed. I've spent a number of hours on this problem now and am starting to get a little desperate, Google turns up virtually no information which makes me suspect I must have missed something elementary. Any enlightenment or hint gratefully received! JS

    Read the article

  • Redis Cookbook Chat Recipe

    - by Tommy Kennedy
    I am a new starter to Node.js and Redis. I got the Redis cookbook and was trying out the Chat client & Server recipe. I was wondering if anybody got the code to work or if there is some bug in the code. I don't see where the messages sent from the client are handled on the server. Any help would be great. Regards, Tom Client Code: <?php ?> <html> <head> <title></title> <script src="http://192.168.0.118:8000/socket.io/socket.io.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js"></script> <script> var socket = io.connect('192.168.0.118',{port:8000}); socket.on('message', function(data){ alert(data); //var li = new Element('li').insert(data); //$('messages').insert({top: li}); }); </script> </head> <body> <ul id="messages"> <!-- chat messages go here --> </ul> <form id="chatform" action=""> <input id="chattext" type="text" value="" /> <input type="submit" value="Send" /> </form> <script> $('#chatform').submit(function() { socket.emit('message', 'test'); //$('chattext').val()); $('chattext').val(""); // cleanup the field return false; }); </script> </body> </html> Server Code: var http = require('http'); io = require('socket.io'); redis = require('redis'); rc = redis.createClient(); //rc1 = redis.createClient(); rc.on("connect",function(){ rc.subscribe("chat"); console.log("In Chat Stream"); }); rc.on("message",function (channel,message){ console.log("Sending hope: " + message); //rc1.publish("chat","hope"); socketio.sockets.emit('message',message); }); server = http.createServer(function(req,res){ res.writeHead(200,{'content-type':'text/html'}); res.end('<h1>hello world</h1>'); }); server.listen(8000); var socketio = io.listen(server);
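    One reading of the recipe as quoted (an assumption, not code taken from the book): the server subscribes to the Redis "chat" channel and relays anything published there out to the browsers, but nothing ever publishes what the browsers send, so the client's socket.emit('message', ...) is silently dropped. A minimal sketch of the missing server-side piece, placed after the var socketio = io.listen(server) line; it uses a second Redis client for publishing because a client that has called subscribe() cannot issue other commands:

        // Handle chat messages arriving from browsers and push them into Redis.
        // rc stays in subscriber mode, so a separate client is used to publish.
        var rcPub = redis.createClient();

        socketio.sockets.on('connection', function (socket) {
            socket.on('message', function (msg) {
                rcPub.publish('chat', msg);
            });
        });

    With this in place the flow is: a browser emits 'message', the server publishes it to the chat channel, the existing rc.on("message", ...) handler fires, and socketio.sockets.emit('message', ...) fans the text back out to every connected client.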

    Read the article

  • Facebook iFrame APP not working in IE, works on every other browser

    - by Sean Ashmore
    So im getting a blank page when loading this page within an iFrame on Internet explorer, every other browser works fine.. I have also tried using p3p headers as other people have suggested, but to no avail. <?php require ("connect.php"); require ("config.php"); require ("fb_config.php"); ?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Login handler</title> <meta http-equiv="content-type" content="text/html; charset=iso-8859-1"> <link rel="stylesheet" href="css/login.css" type="text/css"> </head> <body> <?//=$user?> <?php if($user == 0) { echo "You are not logged into facebook. Nice try."; }else{ $query = "SELECT id,fb_id,login_ip,login_count,activated,sitestate FROM login WHERE fb_id='".mysql_real_escape_string($user)."'"; $result = mysql_query($query) or die(mysql_error()); $row = mysql_fetch_array($result); if (mysql_num_rows($result) == 0) { $sql = "INSERT INTO login SET id = '', fb_id ='" .mysql_real_escape_string($user). "', name = '" .rand(10000000000000000,99999999999999999999). "', signup =NOW() , password = '" .mysql_real_escape_string($pass). "', state = '0', mail = '" .mysql_real_escape_string($_POST['mail']). "',location='".mysql_real_escape_string($randomlocation)."',location_start='".mysql_real_escape_string($randomlocation)."', signup_ip='".mysql_real_escape_string($_SERVER['REMOTE_ADDR'])."',ref='".mysql_real_escape_string($_POST['ref'])."', activation_id = '" .mysql_real_escape_string($activation_link). "',activated='2', killprotection = '$twodayprot',gender='" .mysql_real_escape_string($_POST["gender"]). "'"; $res = mysql_query($sql); } //if($row['fb_id'] != $user){ //echo "Your facebook ID: $user is NOT in the MW DB."; //exit(); //}else{ if(empty($row['login_ip'])){ $row['login_ip'] = $_SERVER['REMOTE_ADDR']; }else{ $ip_information = explode("-", $row['login_ip']); if (in_array($_SERVER['REMOTE_ADDR'], $ip_information)) { $row['login_ip'] = $row['login_ip']; }else{ $row['login_ip'] = $row['login_ip']."-".$_SERVER['REMOTE_ADDR']; } } $update_login = mysql_query("UPDATE login SET login_count=login_count+'1' WHERE name='".mysql_real_escape_string($_POST['username'])."'") or die(mysql_error()); $_SESSION['user_id'] = $row['id']; $result = mysql_query("UPDATE login SET userip='".mysql_real_escape_string($_SERVER['REMOTE_ADDR'])."',login_ip='".mysql_real_escape_string($row['login_ip'])."',login_count='0' WHERE id='".mysql_real_escape_string($_SESSION['user_id'])."'") or die(mysql_error()); if ($row['sitestate'] == 0){ header("location: home.php"); } elseif ($row['sitestate'] == 2) { header("location: killed.php?id={$row['id']}&encrypted={$row['password']}"); } else { header("location: banned.php?id={$row['id']}&encrypted={$row['password']}"); } }// id check. ?> </body> </html>

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is what parts of the video are being watched most. The design I came up with is to split the video into 256 intervals and keep the floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see. Row per every video segment Let's have a database table like this: CREATE TABLE `video_heatmap` ( `id` int(11) NOT NULL AUTO_INCREMENT, `video_id` int(11) NOT NULL, `position` tinyint(3) unsigned NOT NULL, `views` float NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `idx_lookup` (`video_id`,`position`) ) ENGINE=MyISAM Then, whenever we have to process a number of views, make sure there are the respective database rows and add appropriate values to the views column. I found out it's a lot faster if the existence of rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this: UPDATE video_heatmap SET views = views + ? WHERE video_id = ? AND position >= ? AND position < ? This seems, however, a little bloated. The other solution I came up with is Row per video, update in transactions A table will look (sort of) like this: CREATE TABLE video ( id INT NOT NULL AUTO_INCREMENT, heatmap BINARY (4 * 256) NOT NULL, ... ) ENGINE=InnoDB Then, upon every time a view needs to be stored, it will be done in a transaction with consistent snapshot, in a sequence like this: If the video doesn't exist in the database, it is created. A row is retrieved, heatmap, an array of floats stored in the binary form, is converted into a form more friendly for processing (in PHP). Values in the array are increased appropriately and the array is converted back. Row is changed via UPDATE query. So far the advantages can be summed up like this: First approach Stores data as floats, not as some magical binary array. Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation) Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what are the performance penalties of those. I already implemented it and it works. (only applies in my specific situation) Second approach Is using a lot less storage space (the first approach is storing video ID 256 times and stores position for every segment of the video, not to mention primary key). Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking. Might generally work faster because there are a lot less requests being made. Easier to implement in code (although the other one is already implemented). So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach or another?
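    To make the second approach a little more concrete, here is a sketch of just the decode/increment/encode step. It is written in Node.js-style JavaScript rather than the asker's PHP, purely to illustrate the byte layout of the 4 * 256-byte column (4-byte floats; little-endian is an arbitrary choice here), and all names are made up for the example:

        const SEGMENTS = 256;

        // Decode the binary heatmap column into an array of 256 floats.
        function unpackHeatmap(buf) {
            const views = new Array(SEGMENTS);
            for (let i = 0; i < SEGMENTS; i++) {
                views[i] = buf.readFloatLE(i * 4);
            }
            return views;
        }

        // Encode the array back into the binary form stored in the row.
        function packHeatmap(views) {
            const buf = Buffer.alloc(SEGMENTS * 4);
            for (let i = 0; i < SEGMENTS; i++) {
                buf.writeFloatLE(views[i], i * 4);
            }
            return buf;
        }

        // Record one continuously watched interval [from, to) of segment indices.
        function addView(buf, from, to) {
            const views = unpackHeatmap(buf);
            for (let i = from; i < to; i++) {
                views[i] += 1;
            }
            return packHeatmap(views);
        }

    Whichever language does this, the decode-increment-encode has to stay inside the locking transaction described above (or a SELECT ... FOR UPDATE), otherwise two concurrent viewers can read the same blob and one update will overwrite the other.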

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this. Unit Testing Entity Framework Dependent Code using TypeMock Isolator by Muhammad Mosa Introduction Unit testing data access code in my opinion is a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies such as a physical database connection to insert, update, delete or retrieve your data. However when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straight forward but the version of Entity Framework released with .Net 3.5 SP1 has a number of implementation specifics which actually makes faking the existence of a database quite difficult. Faking Entity Framework As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this but it will require additional effort to overcome & workaround a number of limitations in those frameworks. Examples of these limitations include: Not able to fake calls to non virtual methods Not able to fake sealed classes Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls) There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I’m going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator. Mocking Entity Framework with MoQ One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we’ll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (instance of a Blog entity) to Blogs entity set in an ObjectContext. public override void Add(Blog blog) { if(BlogContext.Blogs.Any(b=>b.Name == blog.Name)) { throw new InvalidOperationException("Blog with same name already exists!"); } BlogContext.AddToBlogs(blog); } The method does a very simple check that the name of the new Blog entity instance doesn’t exist. This is done through the simple LINQ query above. If the blog doesn’t already exist it simply adds it to the current context to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exits, and exception (InvalideOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ. [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exits() { //(1) We shouldn't depend on configuration when doing unit tests! 
But, //its a workaround to fake the ObjectContext string connectionString = ConfigurationManager .ConnectionStrings["MyBlogConnString"] .ConnectionString; //(2) Arrange: Fake ObjectContext var fakeContext = new Mock<MyBlogContext>(connectionString); //(3) Next Line will pass, as ObjectContext now can be faked with proper connection string var repo = new BlogRepository(fakeContext.Object); //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object); //(5) Arrange: Set Expectations //Next line will throw an exception by MoQ: //System.ArgumentException: Invalid setup on a non-overridable member fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); //Act repo.Add(new Blog { Name = "NewBlog" }); } This test method is checking to see if the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that’s already exists. On (1) a connection string is initialized from configuration file. To retrieve the full connection string. On (2) a fake ObjectContext is being created. The ObjectContext here is MyBlogContext and its being created using this var fakeContext = new Mock<MyBlogContext>(connectionString); This way a fake context is being created using MoQ. On (3) a BlogRepository instance is created. BlogRepository has dependency on generate Entity Framework ObjectContext, MyObjectContext. And so the fake context is passed to the constructor. var repo = new BlogRepository(fakeContext.Object); On (4) a fake instance of ObjectQuery<Blog> is being created to use as a substitute to MyObjectContext.Blogs property as we will see in (5). On (5) setup an expectation for calling Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail with MoQ throwing an exception because of this line: fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generate property MyBlogContext.Blogs is not virtual/overridable. And assuming it is virtual or you managed to make it virtual it will fail at the following line throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test will fail because the Any extension method is not virtual/overridable. You won’t be able to replace ObjectQuery<Blog> with fake in memory collection to test your LINQ to Entities queries. Now lets see how replacing MoQ with TypeMock Isolator can help. 
Mocking Entity Framework with TypeMock Isolator The following is the same test method we had above for MoQ but this time implemented using TypeMock Isolator: [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException() { //(1) Create fake in memory collection of blogs var fakeInMemoryBlogs = new List<Blog> {new Blog {Name = "FakeBlog"}}; //(2) create fake context var fakeContext = Isolate.Fake.Instance<MyBlogContext>(); //(3) Setup expected call to MyBlogContext.Blogs property through the fake context Isolate.WhenCalled(() => fakeContext.Blogs) .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable()); //(4) Create new blog with a name that already exits in the fake in memory collection in (1) var blog = new Blog {Name = "FakeBlog"}; //(5) Instantiate instance of BlogRepository (Class under test) var repo = new BlogRepository(fakeContext); //(6) Acting by adding the newly created blog () repo.Add(blog); } When running the above test method it will pass as the Add method of BlogRepository is going to throw an InvalidOperationException which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking ObjectContext  at (2) didn’t require a connection string. On (3) Isolator sets up a faking result for MyBlogContext.Blogs when its being called through the fake instance fakeContext created on (2). The faking result is just an in-memory collection declared an initialized on (1). Finally at (6) action we call the Add method of BlogRepository passing a new Blog instance that has a name that’s already exists in the fake in-memory collection which we set up at (1). As expected the test will pass because it will throw the expected exception defined on top of the test method - InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease. Conclusion We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mock, however the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post Muhammad Mosa http://mosesofegypt.net/ http://twitter.com/mosessaur Screencast of unit testing Entity Framework Related Links GuestPost: Introduction to Mocking GuesPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but not restricted to, how a database table can be mapped to a POJO and vice versa, provides mechanisms how a PersistenceContext can be injected in a @Stateless bean and then be used for performing different operations on the database table and write typesafe queries. There are several well known advantages of RDBMS but the NoSQL movement has gained traction over past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL database was designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is more known for what it is not rather than what it is. The basic principles of NoSQL database are: No need to have a pre-defined schema and that makes them a schema-less database. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance is more important than the entire set of functionality typically provided by an RDBMS. This set of databases provide eventual consistency and/or transactions restricted to single items but more focus on CRUD. Not be restricted to SQL to access the information stored in the backing database. Designed to scale-out (horizontal) instead of scale-up (vertical). This is important knowing that databases, and everything else as well, is moving into the cloud. RBDMS can scale-out using sharding but requires complex management and not for the faint of heart. Unlike RBDMS which require a separate caching tier, most of the NoSQL databases comes with integrated caching. Designed for less management and simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak) Document databases (MongoDB or CouchDB) Graph databases (Neo4J) You may think NoSQL is panacea but as I mentioned above they are not meant to replace the mainstream databases and here is why: RDBMS have been around for many years, very stable, and functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge, brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data but that was not the original intention of NoSQL databases and is lacking in that area. Generating any meaningful information other than CRUDing require extensive programming. 
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases and writing simple queries can be involving. Enough talking, lets take a look at some code. This blog has published multiple blogs on how to access a RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Lets get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$./mongod./mongod --help for help and startup optionsSun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version4.5Sun Jun 26 20:41:11 [initandlisten] git version:433bbaa14aaba6860da15bd4de8edf600f56501bSun Jun 26 20:41:11 [initandlisten] build sys info: Linuxbs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 2017:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/sudo chown `id -u` /data/db You can specify a different directory using "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add JavaServer Faces framework. Download MongoDB Java Driver (2.6.3 of this writing) and add it to the project library by selecting "Properties", "LIbraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take name, year, and language of a movie and the submit button invokes the "createMovie" method of "movieSessionBean" and then render "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by "movieSessionBean.movies" property. 
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Try the interactive online shell  The cookbook provides common ways of using MongoDB In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it because the capabilities are going to be very different ? MongoDB uses "BSonObject" class for JSON representation, add @XmlRootElement on a POJO and how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.
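    To go with the SQL-to-MongoDB mapping and the interactive shell mentioned above, here are a few mongo shell (JavaScript) commands against the article's movieDB/movies collection; the filter values are made up for illustration:

        use movieDB                              // switch to the database created by the EJB
        db.movies.find()                         // roughly: SELECT * FROM movies
        db.movies.find({ language: "English" })  // ... WHERE language = 'English'
        db.movies.find({ year: { $gt: 2000 } })  // ... WHERE year > 2000
        db.movies.count()                        // ... SELECT COUNT(*) FROM movies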

    Read the article

  • CodePlex Daily Summary for Friday, June 29, 2012

    CodePlex Daily Summary for Friday, June 29, 2012Popular ReleasesSupporting Guidance and Whitepapers: v1 - Supporting Media: Welcome to the Release Candidate (RC) release of the ALM Rangers Readiness supporting edia As this is a RC release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these RC artifacts in a production environment. Quality-Bar Details All critical bugs have been resolved Known Issues / Bugs Practical Ruck training workshop not yet includedDesigning Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release PreviewDataBooster - Extension to ADO.NET Data Provider: DataBooster Library for Oracle + SQL Server Beta2: This is a derivative library of dbParallel project http://dbparallel.codeplex.com. All above binaries releases require .NET Framework 4.0 or later. SQL Server support is always build-in (can't be unplugged). The first download (DLL) also requires ODP.NET to connect Oracle; The second download (DLL) also requires DataDirect(3.5) to connect Oracle; The third download (DLL) doesn't support Oracle. Please download the source code if the provider need to be replaced by others. For example ODP.NE...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.57: Fix for issue #18284: evaluating literal expressions in the pattern c1 * (x / c2) where c1/c2 is an integer value (as opposed to c2/c1 being the integer) caused the expression to be destroyed.Visual Studio ALM Quick Reference Guidance: v2 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Masashi Fujiwara (?? ??), localized and reviewed the Quick Reference Guidance for the Japanese communities, based on http://vsarquickguide.codeplex.com/releases/view/52402. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...Visual Studio Team Foundation Server Branching and Merging Guide: v1 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Hirokazu Higashino (?? ??), localized and reviewed the Branching Guidance for the Japanese communities, based on http://vsarbranchingguide.codeplex.com/releases/view/38849. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...SQL Server FineBuild: Version 3.1.0: Top SQL Server FineBuild Version 3.1.0This is the stable version of FineBuild for SQL Server 2012, 2008 R2, 2008 and 2005 Documentation FineBuild Wiki containing details of the FineBuild process Known Issues Limitations with this release FineBuild V3.1.0 Release Contents List of changes included in this release Please DonateFineBuild is free, but please donate what you think FineBuild is worth as everything goes to charity. 
Tearfund is one of the UK's leading relief and de...EasySL: RapidSL V2: Rewrite RapidSL UI Framework, Using Silverlight 5.0 EF4.1 Code First Ria Service SP2 + Lastest Silverlight Toolkit.NETDeob0: NETDeob 0.1.1: http://i.imgur.com/54C78.pngSOLID by example: All examples: All solid examplesSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1726.406): Use of new version of connection controls for a full support of OSDP authentication mechanism for CRM Online.Umbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! Added official Android app!ExtAspNet: ExtAspNet v3.1.8.2: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesConfuser: Confuser 1.9: Change log: * Stable output (i.e. 
given the same seed & input assemblies, you'll get the same output assemblies) + Generate debug symbols, now it is possible to debug the output under a debugger! (Of course without enabling anti debug) + Generating obfuscation database, most of the obfuscation data in stored in it. + Two tools utilizing the obfuscation database (Database viewer & Stack trace decoder) * Change the protection scheme -----Please read Bug Report before you report a bug-----...XDesigner.Development: First release: First releaseNew ProjectsArcGIS Server Rest Catalog: jquery widget that display all MapServer service of ESRI ArcGIS Server in accordion control. Require jQuery and JsRender. BugsBox.Lib: BugsBoxlibCodeStudy: The project includes all my code written for .NET studyCommunity Tfs Team Tools: Community TFS Team Tools is a community project based on the example code from ALM Rangers - Quick Response Sample Command line utility to manage TFS TeamsDiveBaseManager: DiveBaseManagerEclipse App: Aplicacion para android para el envio de coordenaads a un servidorJQMdotNET: JQMdotNet is an early attempt to make a series of MVC HTML helpers to quickly render JQuery Mobile pages.MathBuilderFramework: Math Builder FX is a framework for math problems. The idea of this framework is to create real exercises of mathematics throught base classes.MDS MODELING WORKBOOK: MDS Modeling Workbook is a modeling tool and a solution accelerator for Microsoft Master Data Services. Minesweeper: a clean old minesweeperMultithreaded Port Scanner Utility: Mulithreaded Port Scanner Utility is a very simple port scanner that take advantage of the new System.Threading.Task namespace in .Net 4.0Nonocast.Data: Nonocast.Data is a free, open source developer focused object persistence for small and medium software.On{X} Scripts: This project is to share scripts I create or modify from available free/opensource scripts (off-course with due mentions & links to original developer).PCS MAP: ladPowerShell Module for Mayhem: A module for Mayhem with a reaction which executes a PowerShell script.PunkBuster™ Screenshot Viewer: PunkBuster™ Screenshot Viewer shows screenshots of all games protected with PunkBuster™.Random reminder: Objective: To create a simple Windows application that can schedule and automatically reschedule a random timer that falls within a defined interval.Remembrall: Some useful stuff (at least for me) ! Javascript plugins jQuery plugins C# utilities ASP.NET MVC helper extensionsrepotfs: Repositório de exemplosSharepoint Designer: Sharepoint ExplororSPFluid: Simple modularity and data access framework.SPManager: Helper classSQL Server : xp_fixeddrives2: C++ Extended Stored Procedure Get volumes InfoSWORD Extensions for Sharepoint: Push documents in and out of Sharepoint using the SWORD protocol. ***incomplete***Test projects: My testing projecttestdd06282012git01: xzc testdd06282012git1: sadtestdd06282012tfs01: xzctestddhg06282012: xctestddtfs062820121: ,lm;.testhg06282012dd1: zxTlrk: NATotal_Mayhem: Add-on for Mayhem ( http://mayhem.codeplex.com/ ) that includes networking events, power reactions, and more.WallSwitch: An application to cycle your desktop wallpaper.??????: ????????

    Read the article

  • [GEEK SCHOOL] Network Security 2: Preventing Disaster with User Account Control

    - by Ciprian Rusen
    In this second lesson in our How-To Geek School about securing the Windows devices in your network, we will talk about User Account Control (UAC). Users encounter this feature each time they need to install desktop applications in Windows, when some applications need administrator permissions in order to work and when they have to change different system settings and files. UAC was introduced in Windows Vista as part of Microsoft’s “Trustworthy Computing” initiative. Basically, UAC is meant to act as a wedge between you and installing applications or making system changes. When you attempt to do either of these actions, UAC will pop up and interrupt you. You may either have to confirm you know what you’re doing, or even enter an administrator password if you don’t have those rights. Some users find UAC annoying and choose to disable it, but it is a very important security feature of Windows (and we strongly caution against doing that). That’s why in this lesson, we will carefully explain what UAC is and everything it does. As you will see, this feature has an important role in keeping Windows safe from all kinds of security problems. In this lesson you will learn which activities may trigger a UAC prompt asking for permissions and how UAC can be set so that it strikes the best balance between usability and security. You will also learn what kind of information you can find in each UAC prompt. Last but not least, you will learn why you should never turn off this feature of Windows. By the time we’re done today, we think you will have a newly found appreciation for UAC, and will be able to find a happy medium between turning it off completely and letting it annoy you to distraction. What is UAC and How Does it Work? UAC or User Account Control is a security feature that helps prevent unauthorized system changes to your Windows computer or device. These changes can be made by users, applications, and sadly, malware (which is the biggest reason why UAC exists in the first place). When an important system change is initiated, Windows displays a UAC prompt asking for your permission to make the change. If you don’t give your approval, the change is not made. In Windows, you will encounter UAC prompts mostly when working with desktop applications that require administrative permissions. For example, in order to install an application, the installer (generally a setup.exe file) asks Windows for administrative permissions. UAC initiates an elevation prompt like the one shown earlier asking you whether it is okay to elevate permissions or not. If you say “Yes”, the installer starts as administrator and it is able to make the necessary system changes in order to install the application correctly. When the installer is closed, its administrator privileges are gone. If you run it again, the UAC prompt is shown again because your previous approval is not remembered. If you say “No”, the installer is not allowed to run and no system changes are made. If a system change is initiated from a user account that is not an administrator, e.g. the Guest account, the UAC prompt will also ask for the administrator password in order to give the necessary permissions. Without this password, the change won’t be made. Which Activities Trigger a UAC Prompt? 
There are many types of activities that may trigger a UAC prompt:
- Running a desktop application as an administrator
- Making changes to settings and files in the Windows and Program Files folders
- Installing or removing drivers and desktop applications
- Installing ActiveX controls
- Changing settings of Windows features like the Windows Firewall, UAC, Windows Update, Windows Defender, and others
- Adding, modifying, or removing user accounts
- Configuring Parental Controls in Windows 7 or Family Safety in Windows 8.x
- Running the Task Scheduler
- Restoring backed-up system files
- Viewing or changing the folders and files of another user account
- Changing the system date and time
You will encounter UAC prompts during some or all of these activities, depending on how UAC is set on your Windows device. If this security feature is turned off, any user account or desktop application can make any of these changes without a prompt asking for permissions. In this scenario, the different forms of malware existing on the Internet will also have a higher chance of infecting and taking control of your system. In Windows 8.x operating systems you will never see a UAC prompt when working with apps from the Windows Store. That’s because these apps, by design, are not allowed to modify any system settings or files. You will encounter UAC prompts only when working with desktop programs. What Can You Learn from a UAC Prompt? When you see a UAC prompt on the screen, take time to read the information displayed so that you get a better understanding of what is going on. Each prompt first tells you the name of the program that wants to make system changes to your device, then you can see the verified publisher of that program. Dodgy software tends not to display this information and, instead of a real company name, you will see an entry that says “Unknown”. If you have downloaded that program from a less than trustworthy source, then it might be better to select “No” in the UAC prompt. The prompt also shares the origin of the file that’s trying to make these changes. In most cases the file origin is “Hard drive on this computer”. You can learn more by pressing “Show details”. You will see an additional entry named “Program location” where you can see the physical location on your hard drive of the file that’s trying to perform system changes. Make your choice based on the trust you have in the program you are trying to run and its publisher. If a little-known file from a suspicious location is requesting a UAC prompt, then you should seriously consider pressing “No”. What’s Different About Each UAC Level? Windows 7 and Windows 8.x have four UAC levels:
Always notify – when this level is used, you are notified before desktop applications make changes that require administrator permissions or before you or another user account changes Windows settings like the ones mentioned earlier. When the UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This is the most secure and also the most annoying way to set UAC because it triggers the most UAC prompts.
Notify me only when programs/apps try to make changes to my computer (default) – Windows uses this as the default for UAC. When this level is used, you are notified before desktop applications make changes that require administrator permissions. If you are making system changes yourself, UAC doesn’t show any prompts and it automatically gives you the necessary permissions for making the changes you desire. 
When a UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This level is slightly less secure than the previous one because malicious programs can be created to simulate the keystrokes or mouse movements of a user and change system settings for you. If you have a good security solution in place, this scenario should never occur.
Notify me only when programs/apps try to make changes to my computer (do not dim my desktop) – this level differs from the previous one in that, when the UAC prompt is shown, the desktop is not dimmed. This decreases the security of your system because different kinds of desktop applications (including malware) might be able to interfere with the UAC prompt and approve changes that you might not want to be performed.
Never notify – this level is the equivalent of turning off UAC. When using it, you have no protection against unauthorized system changes. Any desktop application and any user account can make system changes without your permission.
How to Configure UAC If you would like to change the UAC level used by Windows, open the Control Panel, then go to “System and Security” and select “Action Center”. In the column on the left you will see an entry that says “Change User Account Control settings”. The “User Account Control Settings” window is now open. Change the position of the UAC slider to the level you want applied, then press “OK”. Depending on how UAC was initially set, you may receive a UAC prompt requiring you to confirm this change. Why You Should Never Turn Off UAC If you want to keep the security of your system at a decent level, you should never turn off UAC. When you disable it, everything and everyone can make system changes without your consent. This makes it easier for all kinds of malware to infect and take control of your system. It doesn’t matter whether you have a built-in or third-party security suite or antivirus installed; basic common-sense measures like having UAC turned on make a big difference in keeping your devices safe from harm. We have noticed that some users disable UAC prior to setting up their Windows devices and installing third-party software on them. They keep it disabled while installing all the software they will use and enable it when done installing everything, so that they don’t have to deal with so many UAC prompts. Unfortunately this causes problems with some desktop applications. They may fail to work after you enable UAC. This happens because, when UAC is disabled, the virtualization techniques UAC uses for your applications are inactive. This means that certain user settings and files are installed in a different place, and when you turn on UAC, applications stop working because they should be placed elsewhere. Therefore, whatever you do, do not turn off UAC completely! Coming up next … In the next lesson you will learn about Windows Defender, what this tool can do in Windows 7 and Windows 8.x, what’s different about it in these operating systems and how it can be used to increase the security of your system.
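As a footnote to the configuration section above: readers who like to check these settings from a script can read the same configuration from the registry. The sketch below is a minimal, read-only example assuming the standard policy values Windows uses for UAC (EnableLUA, ConsentPromptBehaviorAdmin and PromptOnSecureDesktop); treat it as an illustration rather than an official tool, and prefer the Control Panel slider for actually changing the level.

# Read the current UAC policy values (read-only, safe to run)
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
$uac = Get-ItemProperty -Path $key
# EnableLUA = 0 means UAC is turned off ("Never notify"); 1 means UAC is on.
# ConsentPromptBehaviorAdmin: 2 = Always notify, 5 = the default "notify me only when programs try to make changes".
# PromptOnSecureDesktop: 1 = dim the desktop (secure desktop), 0 = do not dim it.
'EnableLUA                  : ' + $uac.EnableLUA
'ConsentPromptBehaviorAdmin : ' + $uac.ConsentPromptBehaviorAdmin
'PromptOnSecureDesktop      : ' + $uac.PromptOnSecureDesktop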

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions as Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
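For reference, here is a sketch of how the assembled workflow.xml might look once the Hive and Pig actions from the fragments above are chained together (the host names, ports and file names are the same placeholder values used throughout this post; a production workflow would normally route errors to a kill node rather than straight to the end):

<workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
  <start to="ParseNCDCData"/>
  <action name="ParseNCDCData">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>localhost:8021</job-tracker>
      <name-node>localhost:8020</name-node>
      <configuration>
        <property>
          <name>oozie.hive.defaults</name>
          <value>/user/oracle/weather_ooze/hive-default.xml</value>
        </property>
      </configuration>
      <script>ncdc_parse.hql</script>
    </hive>
    <ok to="WeatherMan"/>
    <error to="end"/>
  </action>
  <action name="WeatherMan">
    <pig>
      <job-tracker>localhost:8021</job-tracker>
      <name-node>localhost:8020</name-node>
      <script>weather_train.pig</script>
    </pig>
    <ok to="end"/>
    <error to="end"/>
  </action>
  <end name="end"/>
</workflow-app>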

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggers a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and could be quite large. For example, the “azure” package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, “express”, which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.   Let’s Start with a Demo A demo is the most straightforward way to show this. Let’s create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let’s say I want to install “express”. Then I created a new Node.js file named “server.js” and pasted in the code below. 1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7: 8: console.log("Web application opened."); 9: app.listen(process.env.PORT); If we switch to Git for Windows right now we will find that it detected the changes we made, which include “server.js” and all files under the “node_modules” folder. What we need to upload should only be our source code, but the huge package files have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud. First we need to add a special file named “.gitignore”. It seems this cannot be done directly from the file explorer, since this file name consists only of an extension. So we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named “.gitignore”. If the command window asks for input, just press Enter. 1: echo > .gitignore Now open this file, copy in the content below, and save it. 1: node_modules Now if we switch to Git for Windows we will find that the packages under “node_modules” are no longer in the change list. So now if we commit and push, the “express” package will not be uploaded to Windows Azure. Second, let’s tell Windows Azure which packages it needs to install when deploying. Create another file named “package.json”, copy the content below into that file, and save it. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*" 6: } 7: } Now back in Git for Windows, commit our changes and push them to WAWS. Then let’s open the WAWS in the developer portal; we will see that a new deployment has finished. Click the arrow on the right side of this deployment and we can see how WAWS handled it. In particular, we can see that WAWS executed NPM. And if we open the log we can review which command WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed “express” for me on the cloud side, so I didn’t need to upload the whole package to Azure. 
Open this website and we can see the result, which proves that “express” was installed successfully.   What’s Happened Under the Hood Now let’s explain a bit about what “.gitignore” and “package.json” mean. The “.gitignore” is an ignore configuration file for a git repository. All files and folders listed in the “.gitignore” will be skipped from git push. In the example below I copied “node_modules” into this file in my local repository. This means: do not track or upload any files under the “node_modules” folder. So by using “.gitignore” I excluded all packages from uploading to Windows Azure. “.gitignore” can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package. The “package.json” file is the package definition file for a Node.js application. We can define the application name, version, description, author, and other information in it in JSON format. And we can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS will look into this file, find the dependent packages, and execute the NPM command to install them one by one. So in the demo above I copied “express” into this file so that WAWS will install it for me automatically. I updated the dependencies section of the “package.json” file manually. But this can be done partially automatically. If we have a valid “package.json” in our local repository, then when we are going to install some packages we can specify the “--save” parameter in the “npm install” command, so that NPM will update the dependencies part for us. For example, when I wanted to install the “azure” package I executed the command below. Note that I added “--save” to the command. 1: npm install azure --save Once it finishes, my “package.json” is updated automatically. Each dependent package is listed here. The JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about “package.json” please refer here.
Format – Description – Example
version – Must match the version exactly. – "azure": "0.6.7"
>=version – Must be equal to or greater than the version. – "azure": ">0.6.0"
1.2.x – The version number must start with the supplied digits, but any digit may be used in place of the x. – "azure": "0.6.x"
~version – The version must be at least as high as the range, and it must be less than the next major revision above the range. – "azure": "~0.6.7"
* – Matches any version. – "azure": "*"
And WAWS will install the proper version of the packages based on what you defined here. The process of WAWS git deployment and NPM installation would be like this.   But Some Packages… As we know, when we specify the dependencies in “package.json” WAWS will download and install them on the cloud. For most packages this works very well. But some special packages may not work. That is, if the package installation has special environment prerequisites, it might fail. For example, the SQL Server Driver for Node.js package needs “node-gyp”, Python and C++ 2010 installed on the target machine during the NPM installation. If we just put “msnodesql” in the “package.json” file and push it to WAWS, the deployment will fail since there is no “node-gyp”, Python, or C++ 2010 in the WAWS virtual machine. For example, the “server.js” file. 
1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7:  8: var sql = require("msnodesql"); 9: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;"; 10: app.get("/sql", function (req, res) { 11: sql.open(connectionString, function (err, conn) { 12: if (err) { 13: console.log(err); 14: res.send(500, "Cannot open connection."); 15: } 16: else { 17: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 18: if (err) { 19: console.log(err); 20: res.send(500, "Cannot retrieve records."); 21: } 22: else { 23: res.json(results); 24: } 25: }); 26: } 27: }); 28: }); 29: 30: console.log("Web application opened."); 31: app.listen(process.env.PORT); The “package.json” file. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*", 6: "msnodesql": "*" 7: } 8: } And it failed to deploy to WAWS. From the NPM log we can see it’s because “msnodesql” cannot be installed on WAWS. The solution is to ignore all packages in the “.gitignore” file except “msnodesql”, and upload that package ourselves. This can be done by using the content below. We first un-ignore the “node_modules” folder. Then we ignore all sub folders, while still letting git check each sub folder. Finally we un-ignore the sub folder named “msnodesql”, which is the SQL Server Node.js driver. 1: !node_modules/ 2:  3: node_modules/* 4: !node_modules/msnodesql For more information about the syntax of “.gitignore” please refer to this thread. Now if we go to Git for Windows we will find that “msnodesql” is included in the uncommitted set while “express” is not. I also need to remove the “msnodesql” dependency from “package.json”. Commit and push to WAWS. Now we can see the deployment complete successfully. And then we can use the Windows Azure SQL Database from our Node.js application through the “msnodesql” package we uploaded.   Summary In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the “.gitignore” and “package.json” files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them during deployment. For some special packages that cannot be installed by Windows Azure Web Site, such as “msnodesql”, we can put them into the publish payload as well. The combination of Windows Azure Web Site, Node.js, and NPM makes it even easier and quicker for us to develop and deploy our Node.js applications to the cloud.   Hope this helps, Shaun All documents, related graphics, and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
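As a quick recap of the final demo, the two configuration files end up looking something like the sketch below, with “express” restored by WAWS at deployment time and “msnodesql” travelling with the repository (the values are the ones used in this post):

.gitignore:
!node_modules/

node_modules/*
!node_modules/msnodesql

package.json:
{
  "name": "npmdemo",
  "version": "1.0.0",
  "dependencies": {
    "express": "*"
  }
}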

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45,000 rows since we know the bigger picture now.)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms. If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
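As a closing illustration, the harness behind the first experiment can be boiled down to a sketch like the one below: two table-variable heaps filled with random integers, joined once without a hint and once with OPTION (RECOMPILE). The row count, the use of sys.all_columns as a row source and the DATETIME-based timing are illustrative choices rather than the exact original test rig, but they are enough to reproduce the behaviour described above on most systems.

-- A minimal sketch: join two table-variable heaps, with and without OPTION (RECOMPILE).
DECLARE @FirstTable TABLE (RandomInteger INT NOT NULL);
DECLARE @SecondTable TABLE (RandomInteger INT NOT NULL);
DECLARE @WhenWeStarted DATETIME;
DECLARE @Rows INT = 50000;

INSERT INTO @FirstTable (RandomInteger)
SELECT TOP (@Rows) ABS(CHECKSUM(NEWID())) % @Rows
FROM sys.all_columns a CROSS JOIN sys.all_columns b;     -- just a convenient row source

INSERT INTO @SecondTable (RandomInteger)
SELECT TOP (@Rows) ABS(CHECKSUM(NEWID())) % @Rows
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

SELECT @WhenWeStarted = GETDATE();
SELECT COUNT(*)                                          -- matches between the two tables
FROM @FirstTable f
INNER JOIN @SecondTable s ON f.RandomInteger = s.RandomInteger;
SELECT DATEDIFF(ms, @WhenWeStarted, GETDATE()) AS [Without recompile (ms)];

SELECT @WhenWeStarted = GETDATE();
SELECT COUNT(*)
FROM @FirstTable f
INNER JOIN @SecondTable s ON f.RandomInteger = s.RandomInteger
OPTION (RECOMPILE);                                      -- gives the optimizer a realistic row estimate
SELECT DATEDIFF(ms, @WhenWeStarted, GETDATE()) AS [With recompile (ms)];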

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >