Search Results

Search found 12796 results on 512 pages for 'password hash'.


  • Rails: (Devise) Two different methods for new users?

    - by neezer
    I have a Rails 3 app with authentication setup using Devise with the registerable module enabled. I want to have new users who sign up using our outside register form to use the full Devise registerable module, which is happening now. However, I also want the admin user to be able to create new users directly, bypassing (I think) Devise's registerable module. With registerable disabled, my standard UsersController works as I want it to for the admin user, just like any other Rail scaffold. However, now new users can't register on their own. With registerable enabled, my standard UsersController is never called for the new user action (calling Devise::RegistrationsController instead), and my CRUD actions don't seem to work at all (I get dumped back onto my root page with no new user created and no flash message). Here's the log from the request: Started POST "/users" for 127.0.0.1 at 2010-12-20 11:49:31 -0500 Processing by Devise::RegistrationsController#create as HTML Parameters: {"utf8"=>"?", "authenticity_token"=>"18697r4syNNWHfMTkDCwcDYphjos+68rPFsaYKVjo8Y=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "role"=>"manager"}, "commit"=>"Create User"} SQL (0.9ms) ... User Load (0.6ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 2) LIMIT 1 SQL (0.9ms) ... Redirected to http://test-app.local/ Completed 302 Found in 192ms ... but I am able to register new users through the outside form. How can I get both of these methods to work together, such that my admin user can manually create new users and guest users can register on their own? I have my Users controller setup for standard CRUD: class UsersController < ApplicationController load_and_authorize_resource def index @users = User.where("id NOT IN (?)", current_user.id) # don't display the current user in the users list; go to account management to edit current user details end def new @user = User.new end def create @user = User.new(params[:user]) if @user.save flash[:notice] = "#{ @user.email } created." redirect_to users_path else render :action => 'new' end end def edit end def update params[:user].delete(:password) if params[:user][:password].blank? params[:user].delete(:password_confirmation) if params[:user][:password].blank? and params[:user][:password_confirmation].blank? if @user.update_attributes(params[:user]) flash[:notice] = "Successfully updated User." redirect_to users_path else render :action => 'edit' end end def delete end def destroy redirect_to users_path and return if params[:cancel] if @user.destroy flash[:notice] = "#{ @user.email } deleted." redirect_to users_path end end end And my routes setup as follows: TestApp::Application.routes.draw do devise_for :users devise_scope :user do get "/login", :to => "devise/sessions#new", :as => :new_user_session get "/logout", :to => "devise/sessions#destroy", :as => :destroy_user_session end resources :users do get :delete, :on => :member end authenticate :user do root :to => "application#index" end root :to => "devise/session#new" end
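
    A sketch of one commonly suggested arrangement (the admin namespace below is hypothetical, not from the poster's app): keep registerable for the public sign-up form and move the admin-facing CRUD to its own path, so that POST no longer routes to Devise::RegistrationsController#create.

        # config/routes.rb -- hypothetical admin namespace alongside devise_for
        TestApp::Application.routes.draw do
          devise_for :users

          namespace :admin do
            resources :users   # POST /admin/users -> Admin::UsersController#create
          end
        end

        # app/controllers/admin/users_controller.rb
        class Admin::UsersController < ApplicationController
          load_and_authorize_resource

          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              flash[:notice] = "#{@user.email} created."
              redirect_to admin_users_path
            else
              render :action => 'new'
            end
          end
        end

    Guests keep using the outside form (handled by Devise), while the admin screens talk to the namespaced controller, so the two no longer compete for the same route.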

    Read the article

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another associated model and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes class SortFields < Sequel::Model(:sort_fields) set_primary_key :objectid end class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid end Some backstory: the data is imported from a legacy system into mysql. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset(order(:sortnumber)) end then the expected clause is added to the generated SQL, e.g.: >> Items.limit(1).sql => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1" and queries still return objects: >> Items.limit(1).first.class => Items If I order it by the associated fields though... class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). order(:sort1, :sort2, :sort3) ) end ...I get hashes ?> Items.limit(1).first.class => Hash My first thought was that this happens because all fields from sort_fields are included in the results and maybe if selected only the fields from items I would get Items objects again: class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). select(:items.*). order(:sort1, :sort2, :sort3) ) end The generated SQL is what I would expect: >> Items.limit(1).sql => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1" It returns the same rows as the set_dataset(order(:sortnumber)) version but it still doesn't work: >> Items.limit(1).first.class => Hash Before I add the sort fields to the items table so that they can all live happily in the same model, is there a way to tell Sequel to return on object when it wants to return a hash?
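
    If the Sequel documentation is being read correctly here, a graphed dataset only materialises model objects when .all is used, and replacing the model's default dataset via set_dataset is what leads to plain hashes coming back. A hedged workaround (my sketch, not from the original post) is to leave the default dataset alone and expose the ordered, graphed query separately:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid

          # Hypothetical helper: the graphed, ordered dataset, kept out of set_dataset
          def self.sorted
            eager_graph(:sort_fields).order(:sort1, :sort2, :sort3)
          end
        end

        Items.sorted.all.first.class   # expected => Items (use .all so the graph is materialised)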

    Read the article

  • Form submission info showing up in URL and not working

    - by kcurtin
    I am making a Rails 3.1 app and have a signup form that was working fine, but I seemed to have changed something to break it.. I'm using Twitter bootstrap and twitter_bootstrap_form_for gem. I made some change that messed with the formatting of the form fields, but more importantly, when I submit the Sign Up form to create a new User, the information is showing up in the URL and looks like this: EDIT: This is happening in the latest versions of Chrome and Firefox http://localhost:3000/?utf8=%E2%9C%93&authenticity_token=UaKG5Y8fuPul2Klx7e2LtdPLTRepBxDM3Zdy8S%2F52W4%3D&user%5Bemail%5D=kevinc%40example.com&user%5Bpassword%5D=testing&user%5Bpassword_confirmation%5D=testing&commit=Sign+Up Here is the code for the form: <div class="span7"> <h3 class="center" id="more">Sign Up Now!</h3> <%= twitter_bootstrap_form_for @user do |user| %> <%= user.email_field :email, :placeholder => '[email protected]' %> <%= user.password_field :password %> <%= user.password_field :password_confirmation, 'Confirm Password' %> <%= user.actions do %> <%= user.submit 'Sign Up' %> <% end %> <% end %> </div> Here is the code for the UsersController: class UsersController < ApplicationController def new @user = User.new end def create @user = User.new(params[:user]) if @user.save redirect_to about_path, :notice => "Signed up!" else render 'new' end end end Not sure if there is more you need but if so let me know! Thank you! Edit: For debugging I tried specifying :post and also using a plain form_for <%= form_for(@user, :method => :post) do |f| %> <div class="field"> <%= f.label :email %> <%= f.email_field :email %> </div> <div class="field"> <%= f.label :password %> <%= f.password_field :password %> </div> <div class="field"> <%= f.label :password_confirmation %> <%= f.password_field :password_confirmation %> </div> <div class="actions"><%= f.submit "Sign Up" %></div> <% end %> This gives me the same problem as above. Adding routes.rb: Auth31::Application.routes.draw do get "home" => "pages#home" get "about" => "pages#about" get "contact" => "pages#contact" get "help" => "pages#help" get "login" => "sessions#new", :as => "login" get "logout" => "sessions#destroy", :as => "logout" get "signup" => "users#new", :as => "signup" root :to => "pages#home" resources :pages resources :users resources :sessions resources :password_resets end
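
    Parameters showing up in the URL mean the browser submitted the form with GET, which usually happens when the rendered <form> tag is missing or invalid (for example, the builder block ends up nested inside another form in the layout), so the submit falls through. A hedged first debugging step, not taken from the post: make the target and verb explicit and then check the generated HTML in view-source.

        <%# Hedged debugging sketch. The :url and :method options below are
            redundant for a new record, but make the intent explicit. %>
        <%= form_for @user, :url => users_path, :html => { :method => :post } do |f| %>
          ...
        <% end %>

        <!-- The rendered page should contain a tag like this; if it does not,
             the form element is being dropped or nested inside another form: -->
        <form accept-charset="UTF-8" action="/users" method="post">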

    Read the article

  • Implementation of ZipCrypto / Zip 2.0 encryption in java

    - by gomesla
    I'm trying o implement the zipcrypto / zip 2.0 encryption algoritm to deal with encrypted zip files as discussed in http://www.pkware.com/documents/casestudies/APPNOTE.TXT I believe I've followed the specs but just can't seem to get it working. I'm fairly sure the issue has to do with my interpretation of the crc algorithm. The documentation states CRC-32: (4 bytes) The CRC-32 algorithm was generously contributed by David Schwaderer and can be found in his excellent book "C Programmers Guide to NetBIOS" published by Howard W. Sams & Co. Inc. The 'magic number' for the CRC is 0xdebb20e3. The proper CRC pre and post conditioning is used, meaning that the CRC register is pre-conditioned with all ones (a starting value of 0xffffffff) and the value is post-conditioned by taking the one's complement of the CRC residual. Here is the snippet that I'm using for the crc32 public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } Below is my complete code. Can anyone see what I'm doing wrong. package org.apache.commons.compress.archivers.zip; import java.io.IOException; import java.io.InputStream; public class ZipCryptoInputStream extends InputStream { public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } private static final long ENCRYPTION_KEY_1 = 0x12345678; private static final long ENCRYPTION_KEY_2 = 0x23456789; private static final long ENCRYPTION_KEY_3 = 0x34567890; private InputStream baseInputStream = null; private final PKZIPCRC32 checksumEngine = new PKZIPCRC32(); private long[] keys = null; public ZipCryptoInputStream(ZipArchiveEntry zipEntry, InputStream inputStream, String passwd) throws Exception { baseInputStream = inputStream; // Decryption // ---------- // PKZIP encrypts the compressed data stream. Encrypted files must // be decrypted before they can be extracted. // // Each encrypted file has an extra 12 bytes stored at the start of // the data area defining the encryption header for that file. The // encryption header is originally set to random values, and then // itself encrypted, using three, 32-bit keys. The key values are // initialized using the supplied encryption password. 
After each byte // is encrypted, the keys are then updated using pseudo-random number // generation techniques in combination with the same CRC-32 algorithm // used in PKZIP and described elsewhere in this document. // // The following is the basic steps required to decrypt a file: // // 1) Initialize the three 32-bit keys with the password. // 2) Read and decrypt the 12-byte encryption header, further // initializing the encryption keys. // 3) Read and decrypt the compressed data stream using the // encryption keys. // Step 1 - Initializing the encryption keys // ----------------------------------------- // // Key(0) <- 305419896 // Key(1) <- 591751049 // Key(2) <- 878082192 // // loop for i <- 0 to length(password)-1 // update_keys(password(i)) // end loop // // Where update_keys() is defined as: // // update_keys(char): // Key(0) <- crc32(key(0),char) // Key(1) <- Key(1) + (Key(0) & 000000ffH) // Key(1) <- Key(1) * 134775813 + 1 // Key(2) <- crc32(key(2),key(1) >> 24) // end update_keys // // Where crc32(old_crc,char) is a routine that given a CRC value and a // character, returns an updated CRC value after applying the CRC-32 // algorithm described elsewhere in this document. keys = new long[] { ENCRYPTION_KEY_1, ENCRYPTION_KEY_2, ENCRYPTION_KEY_3 }; for (int i = 0; i < passwd.length(); ++i) { update_keys((byte) passwd.charAt(i)); } // Step 2 - Decrypting the encryption header // ----------------------------------------- // // The purpose of this step is to further initialize the encryption // keys, based on random data, to render a plaintext attack on the // data ineffective. // // Read the 12-byte encryption header into Buffer, in locations // Buffer(0) thru Buffer(11). // // loop for i <- 0 to 11 // C <- buffer(i) ^ decrypt_byte() // update_keys(C) // buffer(i) <- C // end loop // // Where decrypt_byte() is defined as: // // unsigned char decrypt_byte() // local unsigned short temp // temp <- Key(2) | 2 // decrypt_byte <- (temp * (temp ^ 1)) >> 8 // end decrypt_byte // // After the header is decrypted, the last 1 or 2 bytes in Buffer // should be the high-order word/byte of the CRC for the file being // decrypted, stored in Intel low-byte/high-byte order. Versions of // PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is // used on versions after 2.0. This can be used to test if the password // supplied is correct or not. byte[] encryptionHeader = new byte[12]; baseInputStream.read(encryptionHeader); for (int i = 0; i < encryptionHeader.length; i++) { encryptionHeader[i] ^= decrypt_byte(); update_keys(encryptionHeader[i]); } } protected byte decrypt_byte() { byte temp = (byte) (keys[2] | 2); return (byte) ((temp * (temp ^ 1)) >> 8); } @Override public int read() throws IOException { // // Step 3 - Decrypting the compressed data stream // ---------------------------------------------- // // The compressed data stream can be decrypted as follows: // // loop until done // read a character into C // Temp <- C ^ decrypt_byte() // update_keys(temp) // output Temp // end loop int read = baseInputStream.read(); read ^= decrypt_byte(); update_keys((byte) read); return read; } private final void update_keys(byte ch) { keys[0] = checksumEngine.crc32((int) keys[0], ch); keys[1] = keys[1] + (byte) keys[0]; keys[1] = keys[1] * 134775813 + 1; keys[2] = checksumEngine.crc32((int) keys[2], (byte) (keys[1] >> 24)); } }
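
For what it's worth, two details of the spec are easy to misread and would explain the symptom (the following is a hedged reworking, not the poster's code): the table-driven CRC-32 is built from the reversed polynomial 0xEDB88320 (the 0xDEBB20E3 "magic number" quoted in APPNOTE.TXT is the check residual, not the table constant), and decrypt_byte() needs 16-bit unsigned arithmetic, which Java's signed byte cannot hold.

    // Hedged sketch of the key schedule and decrypt_byte with the usual fixes applied.
    final class ZipCryptoKeys {
        private static final int[] CRC_TABLE = new int[256];
        static {
            // Standard reversed-polynomial CRC-32 table (0xEDB88320).
            for (int i = 0; i < 256; i++) {
                int c = i;
                for (int j = 0; j < 8; j++)
                    c = ((c & 1) != 0) ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
                CRC_TABLE[i] = c;
            }
        }

        private int key0 = 0x12345678;   // 305419896
        private int key1 = 0x23456789;   // 591751049
        private int key2 = 0x34567890;   // 878082192

        private static int crc32(int crc, byte b) {
            return (crc >>> 8) ^ CRC_TABLE[(crc ^ b) & 0xFF];
        }

        void updateKeys(byte ch) {
            key0 = crc32(key0, ch);
            key1 = key1 + (key0 & 0xFF);     // add the low byte, unsigned
            key1 = key1 * 134775813 + 1;     // 32-bit overflow is intentional
            key2 = crc32(key2, (byte) (key1 >>> 24));
        }

        int decryptByte() {
            int temp = (key2 | 2) & 0xFFFF;            // 16-bit unsigned temp
            return ((temp * (temp ^ 1)) >>> 8) & 0xFF; // low 32 bits of the product are what matters
        }

        /** Decrypts one ciphertext byte and advances the key state. */
        int decrypt(int cipherByte) {
            int plain = (cipherByte ^ decryptByte()) & 0xFF;
            updateKeys((byte) plain);
            return plain;
        }
    }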

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to be share files and printers with one another. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it’s almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we’ve put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network. Sharing Between Windows 7 and XP The most common scenario you’re probably going to run into is sharing between Windows 7 and XP.  Essentially you’ll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and making sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP  Map a Network Drive Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don’t need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7 Sharing between Vista and Windows 7 Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn’t compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista Sharing between Vista and XP When Windows Vista came out, hardware requirements were intensive, drivers weren’t ready, and sharing between them was complicated due to the new Vista structure. The sharing process is pretty straight-forward if you’re not using password protection…as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines Sharing Between Windows 7 with Homegroup If you have one or more Windows 7 machine, sharing files and devices becomes extremely easy with the Homegroup feature. It’s as simple as creating a Homegroup on on machine then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don’t want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public Folder, and setup a printer to be shared as well.   
Use the Homegroup Feature in Windows 7 to Share Printers and Files
Create a Homegroup & Join a New Computer To It
Change which Files are Shared in a Homegroup

Windows Home Server
If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes.
How to Install and Setup Windows Home Server
How to Manage Shared Folders on Windows Home Server

Conclusion
The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won’t work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn’t always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs:
Share a Printer on a Home Network from Vista or XP to Windows 7
How to Share a Folder the XP Way in Windows Vista

    Read the article

  • Add Hotmail & Live Email Accounts to Outlook 2010

    - by Matthew Guay
    Microsoft has recently been promoting upcoming updates to their Hotmail service, promising to make it an even better webmail service. But Microsoft’s revamped Outlook 2010 is already here. Here’s how to integrate Hotmail with Outlook. Outlook 2010 works with a wide variety of email accounts, including POP3, IMAP, and Exchange accounts.  The only problem with POP3 and IMAP accounts is that they only sync email, but not your calendar and contacts like Exchange does.  Hotmail, however, lets you sync your email, contacts, and calendar with Outlook with the Hotmail Connector.  This lets you keep all of your PIM data accessible from everywhere.  Let’s look at how we can set this up on our account. Getting Started The easiest way to add Hotmail to Outlook is to first install the Outlook Hotmail Connector (link below).  Make sure Outlook is closed first, and then proceed with the installation as usual. If you enter your Hotmail account into the New Account setup in Outlook before installing the Hotmail Connector, Outlook will prompt you to download the Hotmail Connector.  However, you’ll have to exit Outlook before you can install the Connector, and then will have to re-enter your information when you restart Outlook, so it’s easier to just install it first. Add Your Hotmail Account to Outlook Now you’re ready to add your Hotmail account to Outlook.  If this is the first time you’ve run Outlook 2010, you’ll be greeted with the following screen.  Click Next to proceed with setup. Then select Yes and click Next again. If you’ve already got an email account setup in Outlook, you can add a new account by clicking File and then selecting Add account. Now, enter your Hotmail account information, and click Next. Outlook will search for your account settings and automatically setup your account with the Hotmail connector we previously installed. If you entered your password incorrectly previously, you may see the following popup.  Re-enter your password and click OK, and Outlook will re-verify your settings. Once everything’s finished and setup, you’ll see the following completion screen.  Click Finish to complete the setup and check out your Hotmail in Outlook. Welcome to your Hotmail account in Outlook 2010.  You’ll notice a small notification at the bottom of the window notifying you that you’re connected to Windows Live Hotmail.  Now your email will synchronize with your Hotmail account, and your Outlook calendar and contacts will be synced with your Live calendar and contacts, respectively.  This is the closest you can get to full Exchange without an Exchange account, and in our experience it works great.  In fact, Hotmail Sync seems to work faster than IMAP sync for us. Setup Hotmail With POP3 Access If you need to access your Hotmail email account but don’t want to install the Outlook Connector, then you can add it with POP3 sync.  We recommend going with the Outlook Connector for the best experience, but if you can’t install it (eg. you’re not allowed to install applications on your work PC) then this is a good alternative. To do this, follow our tutorial on setting up a Gmail POP3 account in Outlook. Although the article concentrates on Gmail, the settings are essentially the same. The only thing you’ll want to change is the Incoming and Outgoing mail server. 
Incoming mail server – pop3.live.com
Outgoing mail server – smtp.live.com
User name – your Hotmail or Live email address
Incoming Server (POP3) – 995
Outgoing Server (SMTP) – 587

Also, check “This server requires an encrypted connection”. Just as in the Gmail example, select TLS for the type of encrypted connection. Then, at the bottom, make sure to uncheck the box to Remove messages from the server after a number of days. This way your messages will still be accessible from your Hotmail account online.

Conclusion
Even though Hotmail is generally not as popular as Gmail, it works great with Outlook integration. If you’re a heavy user of Windows Live services, or want to try them out, Outlook Connector is the easiest way to keep your desktop activity synced with the cloud. If you’re just one of the millions of Hotmail users who want to access their old Hotmail account alongside their other accounts, this method works great for you too. If you’re using Outlook 2003 or 2007, check out our article on using Hotmail from Microsoft Outlook.

Links
Download Outlook Hotmail Connector 32-bit
Download Outlook Hotmail Connector 64-bit – note, only for users of Office 2010 x64

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components  Application Modules, as shown here for one of my sample deployed applications. This screen you can access from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (Hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it. )So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and Update) the MBean Properties. Explanation The pooling parameters and many others are handled through Message Driven Beans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application, along with any overrides defined as -D environement commands on the JVM startup for the application server instance. Using WLST to Browse the Bean ValuesFor our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.Step 0: Before You Start You will need the followingAccess to the console on the machine that is running the serverThe WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different)The name of the deployed application (in this example FMWdh_application1)The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg) This is based on the default path for your model project so it shoudl be fairly easy to work out.The BC configuration your AM is actually running with (look in the DataBindings.cpx for that. In this example DealHelpServiceDeployed is the profile being used..)Step 1: Start the WLST consoleTo start at the beginning, you need to run the WLST command but that needs a little setup:Change to the wlserver_10.3/server/bin directory e.g. under your Fusion Middleware Home[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/binSet your environment using the setWLSEnv script. e.g. on Oracle Enterprise Linux:[oracle@mymachine bin] source setWLSEnv.shStart the WLST interactive console[oracle@mymachine bin] java weblogic.WLSTInitializing WebLogic Scripting Tool (WLST) ...Welcome to WebLogic Server Administration Scripting ShellType help() for help on available commandswls:/offline> Step 2:Enter the WLST commandsConnect to the server wls:> connect('weblogic','password')Change to the Custom root, this is where the AMPooling MBeans are registered wls:> custom()Change to the b4j MBean directorywls:> cd ('oracle.bc4j.mbean.config')Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: One for the config entry itself, and then one each for Security, DB Connection and AM Pooling. 
So if you deploy an app with just one configuration you'll see 5 directories, if it has two configurations in the .xcfg you'll see 9 and so on.The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the Application Name, the configuration you are using and the xcfg name: First of all narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications. Now look for the correct application name e.g. Application=FMWdh_application1The config setting in that sub-list should already be correct and match what you expect e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfgFinally look for the correct value for the AppModuleConfigType e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployedNow you have identified the correct directory name, change to that (keep the name on one line of course - I've had to split it across lines here for clarity:wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,     type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,    Application=FMWdh_application1,    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed') Now you can actually view the parameter values with a simple ls() commandwls:> ls()And here's the output in which you can view the realtime values of the various pool settings: -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy -rw- AmpoolDoampooling true -rw- AmpoolDynamicjdbccredentials false -rw- AmpoolInitpoolsize 2 -rw- AmpoolIsuseexclusive true -rw- AmpoolMaxavailablesize 40 -rw- AmpoolMaxinactiveage 600000 -rw- AmpoolMaxpoolsize 4096 -rw- AmpoolMinavailablesize 2 -rw- AmpoolMonitorsleepinterval 600000 -rw- AmpoolResetnontransactionalstate true -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory -rw- AmpoolTimetolive 3600000 -rw- AmpoolWritecookietoclient false -r-- ConfigMBean true -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl -rw- Doconnectionpooling false -rw- Dofailover false -rw- Initpoolsize 0 -rw- Maxpoolcookieage -1 -rw- Maxpoolsize 4096 -rw- Poolmaxavailablesize 25 -rw- Poolmaxinactiveage 600000 -rw- Poolminavailablesize 5 -rw- Poolmonitorsleepinterval 600000 -rw- Poolrequesttimeout 30000 -rw- Pooltimetolive -1 -r-- ReadOnly false -rw- Recyclethreshold 10 -r-- RestartNeeded false -r-- SystemMBean false -r-- eventProvider true -r-- eventTypes java.lang.String[jmx.attribute.change] -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl Thanks to Brian Fry on the JDeveloper PM Team who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
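
As mentioned at the start, the same steps can also be collected into a script file rather than typed interactively. A minimal sketch (the admin URL, credentials and MBean path below are the example values from this post; substitute your own) that could be run with java weblogic.WLST browse_ampool.py:

    # browse_ampool.py -- connect, navigate to the AMPool MBean and list its attributes
    connect('weblogic', 'password', 't3://localhost:7001')
    custom()
    cd('oracle.bc4j.mbean.config')
    cd('oracle.bc4j.mbean.config:name=AMPool,'
       'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
       'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
       'Application=FMWdh_application1,'
       'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
    ls()
    # Read (or, if permitted, change) a single attribute:
    print get('AmpoolMaxpoolsize')
    # set('AmpoolMaxpoolsize', 100)   # uncomment to update the running value
    disconnect()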

    Read the article

  • Postfix log.... spam attempt?

    - by luri
    I have some weird entries in my mail.log. What I'd like to ask is if postfix is avoiding correctly (according with the main.cf attached below) what seems to be relay attempts, presumably for spamming, or if I can enhance it's security somehow. Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: connect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143] Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: warning: non-SMTP command from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]: GET / HTTP/1.1 Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: disconnect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143] Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection rate 1/60s for (smtp:80.99.46.143) at Feb 2 11:53:25 Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection count 1 for (smtp:80.99.46.143) at Feb 2 11:53:25 Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max cache size 1 at Feb 2 11:53:25 Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: connect from vs148181.vserver.de[62.75.148.181] Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: warning: non-SMTP command from vs148181.vserver.de[62.75.148.181]: GET / HTTP/1.1 Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: disconnect from vs148181.vserver.de[62.75.148.181] Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection rate 1/60s for (smtp:62.75.148.181) at Feb 2 12:09:19 Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection count 1 for (smtp:62.75.148.181) at Feb 2 12:09:19 Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max cache size 1 at Feb 2 12:09:19 Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: connect from unknown[202.46.129.123] Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: warning: non-SMTP command from unknown[202.46.129.123]: GET / HTTP/1.1 Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: disconnect from unknown[202.46.129.123] Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection rate 1/60s for (smtp:202.46.129.123) at Feb 2 14:17:02 Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection count 1 for (smtp:202.46.129.123) at Feb 2 14:17:02 Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max cache size 1 at Feb 2 14:17:02 Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: warning: 95.110.224.230: hostname host230-224-110-95.serverdedicati.aruba.it verification failed: Name or service not known Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: connect from unknown[95.110.224.230] Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: lost connection after CONNECT from unknown[95.110.224.230] Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: disconnect from unknown[95.110.224.230] Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection rate 1/60s for (smtp:95.110.224.230) at Feb 2 20:57:33 Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection count 1 for (smtp:95.110.224.230) at Feb 2 20:57:33 Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max cache size 1 at Feb 2 20:57:33 Feb 2 21:13:44 MYSERVER pop3d: Connection, ip=[::ffff:219.94.190.222] Feb 2 21:13:44 MYSERVER pop3d: LOGIN FAILED, user=admin, ip=[::ffff:219.94.190.222] Feb 2 21:13:50 MYSERVER pop3d: LOGIN FAILED, user=test, ip=[::ffff:219.94.190.222] Feb 2 21:13:56 MYSERVER pop3d: LOGIN FAILED, user=danny, ip=[::ffff:219.94.190.222] Feb 2 21:14:01 MYSERVER pop3d: LOGIN FAILED, user=sharon, ip=[::ffff:219.94.190.222] Feb 2 21:14:07 MYSERVER pop3d: LOGIN FAILED, user=aron, ip=[::ffff:219.94.190.222] Feb 2 21:14:12 MYSERVER pop3d: LOGIN 
FAILED, user=alex, ip=[::ffff:219.94.190.222] Feb 2 21:14:18 MYSERVER pop3d: LOGIN FAILED, user=brett, ip=[::ffff:219.94.190.222] Feb 2 21:14:24 MYSERVER pop3d: LOGIN FAILED, user=mike, ip=[::ffff:219.94.190.222] Feb 2 21:14:29 MYSERVER pop3d: LOGIN FAILED, user=alan, ip=[::ffff:219.94.190.222] Feb 2 21:14:35 MYSERVER pop3d: LOGIN FAILED, user=info, ip=[::ffff:219.94.190.222] Feb 2 21:14:41 MYSERVER pop3d: LOGIN FAILED, user=shop, ip=[::ffff:219.94.190.222] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: warning: 71.6.142.196: hostname db4142196.aspadmin.net verification failed: Name or service not known Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: connect from unknown[71.6.142.196] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: lost connection after CONNECT from unknown[71.6.142.196] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: disconnect from unknown[71.6.142.196] Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection rate 1/60s for (smtp:71.6.142.196) at Feb 3 06:49:29 Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection count 1 for (smtp:71.6.142.196) at Feb 3 06:49:29 Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max cache size 1 at Feb 3 06:49:29 I have Postfix 2.7.1-1 running on Ubuntu 10.10. This is my (modified por privacy) main.cf: smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no readme_directory = no smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_key_file = /etc/ssl/private/smtpd.key myhostname = mymailserver.org alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mymailserver.org, MYSERVER, localhost relayhost = mynetworks = 127.0.0.0/8, 192.168.1.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all home_mailbox = Maildir/ smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination mailbox_command = smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_tls_security_level = may smtpd_tls_auth_only = no smtp_tls_note_starttls_offer = yes smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s tls_random_source = dev:/dev/urandom smtp_tls_security_level = may
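
For what it's worth, the log extract suggests the configuration is behaving: the HTTP-style "GET / HTTP/1.1" probes are dropped as non-SMTP commands, and relaying is refused because reject_unauth_destination sits in smtpd_recipient_restrictions. A few restrictions that are commonly layered on top are sketched below (these are suggestions, not taken from the post, and compatible with Postfix 2.7; test before deploying), and something like fail2ban would help with the repeated pop3d LOGIN FAILED attempts:

    smtpd_helo_required = yes
    disable_vrfy_command = yes
    smtpd_client_restrictions =
        permit_mynetworks,
        reject_rbl_client zen.spamhaus.org
    smtpd_recipient_restrictions =
        permit_sasl_authenticated,
        permit_mynetworks,
        reject_unauth_destination,
        reject_non_fqdn_recipient,
        reject_unknown_recipient_domain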

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 19 (sys.dm_exec_query_stats)

    - by Tamarick Hill
    The sys.dm_exec_query_stats DMV is one of the most useful DMV’s out there when it comes to performance tuning. If you have been keeping up with this blog series this month, you know that I started out on Day 1 reviewing many of the DMV’s within the ‘exec’ namespace. I’m not sure how I missed this one considering how valuable it is, but hey, they say it’s better late than never right?? On Day 7 and Day 8 we reviewed the sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats respectively. This sys.dm_exec_query_stats DMV is very similar to these two. As a matter of fact, this DMV will return all of the information you saw in the other two DMV’s, but in addition to that, you can see stats for all queries that have cached execution plans on your server. You can even see stats for statements that are ran Ad-Hoc as long as they are still cached in the buffer pool. To better illustrate this DMV, let have a quick look at it: SELECT * FROM sys.dm_exec_query_stats As you can see, there is a lot of information returned from this DMV. I wont go into detail about each and every one of these columns, but I will touch on a few of them briefly. The first column is the ‘sql_handle’, which if you remember from Day 4 of our blog series, I explained how you can use this column to extract the actual SQL text that was executed. The next columns statement_start_offset and statement_end_offset provide you a way of extracting the exact SQL statement that was executed as part of a batch. The plan_handle column is used to extract the Execution plan that was used, which we talked about during Day 5 of this blog series. Later in the result set, you have columns to identify how many times a particular statement was executed, how much CPU time it used, how many reads/writes it performed, the duration, how many rows were returned, etc. These columns provide you with a solid avenue to begin your performance optimization. The last column I will touch on is the query_plan_hash column. A lot of times when you have Dynamic SQL running on your server, you have similar statements with different parameter values being passed in. Many times these types of statements will get similar execution plans and then a Binary hash value can be generated based on these similar plans. This query plan hash can be used to find the cost of all queries that have similar execution plans and then you can tune based on that plan to improve the performance of all of the individual queries. This is a very powerful way of identifying and tuning Ad-hoc statements that run on your server. As I stated earlier, this sys.dm_exec_query_stats DMV is a very powerful and recommended DMV for performance tuning. You are able to quickly identify statements that are running on your server and analyze their impact on system resources. Using this DMV to track down the biggest performance killers on your server will allow you to make the biggest gains once you focus your tuning efforts on those top offenders. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms189741.aspx Follow me on Twitter @PrimeTimeDBA
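
    To make that concrete, here is a sketch of how the handles and offsets are usually combined (adapted from the Books Online patterns rather than taken from the post): the statement text comes from sys.dm_exec_sql_text via sql_handle plus the start/end offsets, and the cached plan from sys.dm_exec_query_plan via plan_handle.

        -- Top CPU consumers with the exact statement text and cached plan
        SELECT TOP (10)
               qs.execution_count,
               qs.total_worker_time / qs.execution_count AS avg_cpu_time,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
               SUBSTRING(st.text,
                         (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                                WHEN -1 THEN DATALENGTH(st.text)
                                ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text,
               qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.total_worker_time DESC;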

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure Connect module in the Test Agent configuration; now the connect endpoints need to be enabled on the on premise machines, so let's have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. Click on Virtual Network; if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first. Click on Install Local Endpoints from the top left of the panel and you get a url with a token id appended to it. Remember the token I showed you earlier: in theory the token you get here should match the token you added to the Test Agent config file. Copy the url to the clipboard and paste it in IE (important: the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray. Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them and then start Windows Azure Connect. Bottom line, you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN' and then 'Management Certificates' followed by Add Certificates as shown in the screen shot below. Browse to the location where you saved the certificate earlier, remember… refer to Step I in case you forgot. Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files. IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu bar select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc and enter the machine name in the pop up) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just enter the password unencrypted)', fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step. So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give yourself a pat on the back, you are 80% mission accomplished! Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and test agents, and between test agents themselves. Click Save. Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN. Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block something otherwise). In the run options, you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that any policies or restrictions don't block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication. And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here. VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate
Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate
Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig: create a new Test settings file, change the Test Execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig. Review and What's next? A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post, and I would love to hear your feedback!
Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from the Azure flexibility: destroy Test Agents when not in use, and it takes < 25 minutes to spin up a new Test Agent.
- Most importantly, test network latency (network latency and speed of connection are two different things – usually network latency is very hard to test): by placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page having been requested from the other side of the globe.
Next Steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, unattended install of the test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability and a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism, such as FTP. The first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.Figure 1: Target instance importing multiple remote models for local scoring In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building it's own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices who can score patient data locally.Figure 2: Multiple target instances importing the same model from a source instance for local scoring Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example: create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1'; Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension. This is because export_model appends "01" to this name.  Next, connect to the local database and invoke DBMS_FILE_TRANSFER.GET_FILE and import the model. Note that "01" is eliminated in the target system file name.  connect <source_schema>/<password>@SOURCE1_LINK; BEGIN  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',                                 'MY_SOURCE_DIR_OBJECT',                                 'name =''MY_MINING_MODEL'''); END; connect <target_schema>/<password>; BEGIN  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',                               'EXPORT_FILE_NAME' || '01.dmp',                               'SOURCE1_LINK',                               'MY_TARGET_DIR_OBJECT',                               'EXPORT_FILE_NAME' || '.dmp' );  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',                                 'MY_TARGET_DIR_OBJECT'); END; To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. 
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');

    Read the article

  • Building a better mouse-trap &ndash; Improving the creation of XML Message Requests using Reflection, XML &amp; XSLT

    - by paulschapman
    Introduction The way I previously created messages to send to the GovTalk service I used the XMLDocument to create the request. While this worked it left a number of problems; not least that for every message a special function would need to created. This is OK for the short term but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepted XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on it’s number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service and we get back a list of companies that match the criteria passed A message is structured in two parts; The envelope which identifies the person sending the request, with the name of the request, and the body which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However there is a common properties in all the messages that we send to Companies House. These properties are as follows SenderId – the id of the person sending the message SenderPassword – the password associated with Id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are unique to the Companies House message, and because they are shared with all messages they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources with the call to GetExecutingAssembly() ) Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType() We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent  to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project *.csdef which contains the definition of configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition>   Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to forth settings; SenderId, SenderPassword and GovTalkUrl The value of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration>   Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the object. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));   The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so it can be used to extract the response and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of being of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet>   Only thing different about previous XSL files is the references to two namespaces ev & gt. These are defined in the GovTalk response at the top of the response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then  the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms);   We set up a serialization object using the object type containing the error state and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not then  we extract the results and pass that as an object back to the calling function. We go this by guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to make calls to send a request, and interpret the results are; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable with a little change with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service all I need to do is create various objects for the result and message sent and the relevent XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.
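A small aside before moving on: the calling code above uses Toolbox.ReadFully to drain the response stream, but the post never shows that helper. Here is a minimal sketch of what it might look like; this is my assumption of the implementation, shown as another member of the static Toolbox class and needing a using System.IO; directive.

/// <summary>
/// Reads a stream to the end and returns its content as a byte array.
/// Minimal sketch only; the original implementation is not shown in the post.
/// </summary>
public static byte[] ReadFully(Stream input)
{
    using (MemoryStream buffer = new MemoryStream())
    {
        byte[] chunk = new byte[4096];
        int bytesRead;
        // Copy the stream in 4 KB chunks until there is nothing left to read.
        while ((bytesRead = input.Read(chunk, 0, chunk.Length)) > 0)
        {
            buffer.Write(chunk, 0, bytesRead);
        }
        return buffer.ToArray();
    }
}

With that in place, the call Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader)) reads the whole GovTalk response into a string as used earlier.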

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems.  I'm always interested in new ways to store and access graphs.  As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores.  Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it.  Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.  What's in a Property Graph?  In the most general sense, a graph is just a collection of vertices and edges.  Vertices and edges can have properties: weights, names, or any number of other traits.  In an undirected graph, edges connect vertices without direction.  A directed graph specifies that all edges have a head and a tail --- a direction.  A multi-graph allows multiple edges to connect two vertices.  A "property graph" encompasses all of these traits. Key-Value Stores for Property Graphs Key-Value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs.  First, if any vertex or edge can have any number of traits, we can treat it as a hash map.  For example: Vertex["name"] = "Mary" Vertex["age"] = 28 Vertex["ID"] = 12345  and so on.  This is a natural key-value relationship: the key "name" maps to the value "Mary."  Moreover if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph.  As such, any scalable key-value store is fertile ground for planting graphs. Oracle NoSQL Database as a Scalable Graph Database While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure.  Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key.  The storage guarantee is simple.  Major keys will be distributed across storage nodes, which could encompass a large number of servers.  However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node.  For example, the vertices: /Personnel/Vertex/1  and /Personnel/Vertex/2 May be stored on different servers, but /Personnel/Vertex/1-/name and  /Personnel/Vertex/1-/age will always be on the same server.  This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node.  Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion.  By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes.  At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices). Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data.  However, a graph store isn't useful without a good graph to work on.  I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
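If the major/minor key layout is easier to see in code than in prose, the toy sketch below illustrates the idea with nothing more than in-memory dictionaries. It deliberately avoids the real Oracle NoSQL Database and Blueprints APIs (those are Java and live in the linked repository); it only shows how a vertex's properties hang off its major key as minor keys, which is why a whole-vertex lookup stays on one storage node.

using System;
using System.Collections.Generic;

// Toy illustration of the key layout described above; not the Oracle NoSQL API.
class KeyLayoutSketch
{
    static void Main()
    {
        // Major key -> (minor key -> value). Major keys spread across storage nodes;
        // all minor keys under one major key live on the same node.
        var store = new Dictionary<string, Dictionary<string, string>>();

        store["/Personnel/Vertex/1"] = new Dictionary<string, string>
        {
            { "/name", "Mary" },
            { "/age", "28" },
            { "/ID", "12345" }
        };

        // Retrieving every property of the vertex touches a single major key.
        foreach (var property in store["/Personnel/Vertex/1"])
        {
            Console.WriteLine("/Personnel/Vertex/1-{0} = {1}", property.Key, property.Value);
        }
    }
}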

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don't use, and to write a small set of custom logic which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (Note: We are also using Visual Studio 2012). Now TFS Azure uses your Live ID and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find and there are some answers out there which don't appear to be answers at all given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do: Go here and get the "TFS Service Credential Viewer". Install it, run it and connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":
Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.VersionControl.Common.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.dll
Microsoft.TeamFoundation.WorkItemTracking.Common.dll
If hosting this in Internet Information Server, for the application pool this app runs under, you will need to enable 32 bit support. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:
<appSettings>
  <!-- Add reference to TFS Client Cache -->
  <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
</appSettings>
With all that in place, you can write the following code:
var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
currentCollection.EnsureAuthenticated();
In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
In addition, make sure the service user account and password that were generated in the first step are substituted in here. Note: If something is not right, the "EnsureAuthenticated()" call will throw an exception with a message saying you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised. That is, a similar but different exception message. And that is it. You can then query the collection using something like:
var service = currentCollection.GetService<WorkItemStore>();
var proj = service.Projects[0];
var allQueries = proj.StoredQueries;
for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
{
    var query = allQueries[qcnt];
    var queryDesc = string.Format("Query found named: {0}", query.Name);
}
You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
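As a footnote to the StoredQueries example above: the same WorkItemStore can also run ad-hoc WIQL, which is closer to what the attribute-management app at the start of the post needs. The sketch below is a hedged illustration; 'MyProject' and 'My.Custom.TextField' are placeholders for your own team project name and the reference name of the string field you decided to repurpose.

// Query work items with WIQL and read the repurposed string field.
var store = currentCollection.GetService<WorkItemStore>();
var workItems = store.Query(
    "SELECT [System.Id], [System.Title] " +
    "FROM WorkItems " +
    "WHERE [System.TeamProject] = 'MyProject' " +
    "AND [System.WorkItemType] = 'User Story'");

foreach (Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem workItem in workItems)
{
    // Read whatever R&D attribute string has been packed into the spare field.
    var attributeValue = workItem.Fields["My.Custom.TextField"].Value as string;
    System.Diagnostics.Debug.WriteLine(
        string.Format("{0}: {1} -> {2}", workItem.Id, workItem.Title, attributeValue));

    // To write an updated value back from the MVC app:
    // workItem.Fields["My.Custom.TextField"].Value = newValue;
    // workItem.Save();
}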

    Read the article

  • Silverlight 4 Twitter Client - Part 2

    - by Max
    We will create a few classes now to help us with storing and retrieving user credentials, so that we don't ask for it every time we want to speak with Twitter for getting some information. Now the class to sorting out the credentials. We will have this class as a static so as to ensure one instance of the same. This class is mainly going to include a getter setter for username and password, a method to check if the user if logged in and another one to log out the user. You can get the code here. Now let us create another class to facilitate easy retrieval from twitter xml format results for any queries we make. This basically involves just creating a getter setter for all the values that you would like to retrieve from the xml document returned. You can get the format of the xml document from here. Here is what I've in my Status.cs data structure class. using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes;  namespace MaxTwitter.Classes { public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } } }  Now let us looking into implementing the Login.xaml.cs, first thing here is if the user is already logged in, we need to redirect the user to the homepage, this we can accomplish using the event OnNavigatedTo, which is fired when the user navigates to this particular Login page. Here you utilize the navigate to method of NavigationService to goto a different page if the user is already logged in. if (GlobalVariable.isLoggedin())         this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative));  On the submit button click event, add the new event handler, which would save the perform the WebClient request and download the results as xml string. WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);  The following line allows us to create a web client to create a web request to a url and get back the string response. Something that came as a great news with SL 4 for many SL developers.   WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password);  Here in the following line, we add an event that has to be fired once the xml string has been downloaded. Here you can do all your XLINQ stuff.   myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);   myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));  Now let us look at implementing the TimelineRequestCompleted event. Here we are not actually using the string response we get from twitter, I just use it to ensure the user is authenticated successfully and then save the credentials and redirect to home page. public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { MessageBox.Show("This application must be installed first"); }  If there is no error, we can save the credentials to reuse it later.   
else
{
    GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password);
    this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative));
}
}
Ok, so now the login page is done. Now the main thing – running this application. This credentials stuff only works if the application is run out of the browser. So we need to fiddle with a few Silverlight project settings to enable this. Here is how: Right click on the Silverlight project > Properties, then check "Enable running application out of browser". Then click on Out-Of-Browser settings and check the "Require elevated trust…" option. That's it, all done to run. Now press F5 to run the application and fix the errors, if any. Then once the application opens up in the browser with the login page, right click and choose install. Once you install, it will automatically run; you can log in and see that you are redirected to the Home page. Here are the files that are related to this post. We will look at implementing the Home page, etc… in the next post. Please post your comments and feedback; it would greatly help me in improving my posts! Thanks for your time, catch you soon.
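One part the post leaves as "all your XLINQ stuff" is actually turning the downloaded timeline XML into the Status objects defined earlier. Here is a minimal sketch of how that projection could look; the element names follow the Twitter statuses XML format linked above, but treat this as an illustrative assumption rather than production-ready parsing.

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using MaxTwitter.Classes;

// Sketch: project the friends timeline XML into Status objects.
private List<Status> ParseTimeline(string timelineXml)
{
    XDocument doc = XDocument.Parse(timelineXml);

    var statuses = from status in doc.Descendants("status")
                   select new Status
                   {
                       ID = (string)status.Element("id"),
                       Text = (string)status.Element("text"),
                       Source = (string)status.Element("source"),
                       UserID = (string)status.Element("user").Element("id"),
                       UserName = (string)status.Element("user").Element("screen_name")
                   };

    return statuses.ToList();
}

You could call something like this from TimelineRequestCompleted with e.Result once you start consuming the response, and bind the resulting list to a ListBox on the Home page.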

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previous known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it. I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is a like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in the SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a “common table expression” – this is proving to be a more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’. In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”. This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”), will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6) which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • Most Innovative IDM Projects: Awards at OpenWorld

    - by Tanu Sood
    On Tuesday at Oracle OpenWorld 2012, Oracle recognized the winners of Innovation Awards 2012 at a ceremony presided over by Hasan Rizvi, Executive Vice President at Oracle. Oracle Fusion Middleware Innovation Awards recognize customers for achieving significant business value through innovative uses of Oracle Fusion Middleware offerings. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year’s Award honors customers for their cutting-edge solutions driving business innovation and IT modernization using Oracle Fusion Middleware. The program has grown over the past 6 years, receiving a record number of nominations from customers around the globe. The winners were selected by a panel of judges that ranked each nomination across multiple different scoring categories. Congratulations to both Avea and ETS for winning this year’s Innovation Award for Identity Management. Identity Management Innovation Award 2012 Winner – Avea Company: Founded in 2004, AveA is the sole GSM 1800 mobile operator of Turkey and has reached a nationwide customer base of 12.8 million as of the end of 2011 Region: Turkey (EMEA) Products: Oracle Identity Manager, Oracle Identity Analytics, Oracle Access Management Suite Business Drivers: ·         To manage the agility and scale required for GSM Operations and enable call center efficiency by enabling agents to change their identity profiles (accounts and entitlements) rapidly based on call load. ·         Enhance user productivity and call center efficiency with self service password resets ·         Enforce compliance and audit reporting ·         Seamless identity management between AveA and parent company Turk Telecom Innovation and Results: ·         One of the first Sun2Oracle identity management migrations designed for high performance provisioning and trusted reconciliation built with connectors developed on the ICF architecture that provides custom user interfaces for  dynamic and rapid management of roles and entitlements along with entitlement level attestation using closed loop remediation between Oracle Identity Manager and Oracle Identity Analytics. ·         Dramatic reduction in identity administration and call center password reset tasks leading to 20% reduction in administration costs and 95% reduction in password related calls. ·         Enhanced user productivity by up to 25% to date ·         Enforced enterprise security and reduced risk ·         Cost-effective compliance management ·         Looking to seamlessly integrate with parent and sister companies’ infrastructure securely. Identity Management Innovation Award 2012 Winner – Education Testing Service (ETS)       See last year's winners here --Company: ETS is a private nonprofit organization devoted to educational measurement and research, primarily through testing. Region: U.S.A (North America) Products: Oracle Access Manager, Oracle Identity Federation, Oracle Identity Manager Business Drivers: ETS develops and administers more than 50 million achievement and admissions tests each year in more than 180 countries, at more than 9,000 locations worldwide.  As the business becomes more globally based, having a robust solution to security and user management issues becomes paramount. 
The organization was looking for: · Simplified user experience for over 3000 company users and a dynamic student and staff population of more than 6 million · Infrastructure and administration cost reduction · Managing security risk by controlling 3rd party access to ETS systems · Enforce compliance and manage audit reporting · Automate on-boarding and decommissioning of user accounts to improve security, reduce administration costs and enhance user productivity · Improve user experience with simplified sign-on and user self service Innovation and Results: 1. Manage Risk · Centralized system to control user access · Provided a secure way of accessing service providers' applications using federated SSO. · Provides reporting capability for auditing, governance and compliance. 2. Improve efficiency · Real-time provisioning to target systems · Centralized provisioning system for user management and access controls. · Enabling user self service. 3. Reduce cost · Re-using common shared services for provisioning, SSO and access by application, reducing development cost and time. · Reducing infrastructure and maintenance cost by decommissioning legacy/redundant IDM services. · Reducing time and effort to implement security functionality in business applications ("onboard" instead of new development). ETS was able to fold in new and evolving requirements in addition to the initially stated goals, realizing a quick ROI and successfully meeting business objectives. Congratulations to the winners once again. We will be sure to bring you more from these Innovation Award winners over the next few months.

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
Question Will I get penalised as "sneaky Javascript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)? Goal I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers. Background I'm working on a static file server (GitHub Pages). No server side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY. I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html. Setup the Hijax The index.html page contains all OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");. AJAX content about.html does not contain anything extra to try and cloak content for crawlers. The about.html file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). The about.html <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu. The (Sneaky?) Redirect Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deeplinking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical. Known issue with inbound links e.g. Share / Bookmarking It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page - See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html. Duplicate Sorry. I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question? Or just leave it on both sites? Please leave a comment.
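For illustration, the "redirect on click or after 10 seconds" idea could be expressed as a small script at the bottom of about.html. This is only a sketch (the element id, delay, and hash name are assumptions), and it says nothing about whether Google will treat it as cloaking:

// Illustrative sketch for about.html - assumes the page knows its "enhanced" hash URL.
// Only runs for JavaScript-enabled visitors; crawlers and non-JS users keep the plain page.
(function () {
  var target = "/#about";            // the enhanced version of this page in index.html
  var delay = 10000;                 // fall back to an automatic redirect after 10 seconds

  function goEnhanced() {
    window.location.replace(target); // replace() keeps about.html out of the back-button history
  }

  // Redirect when the visitor clicks the "view full site" link (assumed id="full-site-link")...
  var link = document.getElementById("full-site-link");
  if (link) {
    link.onclick = function (e) {
      if (e && e.preventDefault) { e.preventDefault(); }
      goEnhanced();
      return false;
    };
  }

  // ...or automatically after the delay.
  window.setTimeout(goEnhanced, delay);
})();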

    Read the article

  • Trouble with Samba Domain

    - by Arkevius
    I'm having a bit of trouble setting up this Samba domain correctly. I'm getting an Access Denied error when trying to add a Windows XP machine to the domain. I'll go through my scenario in detail, but for those of you wanting a TLDR summary it'll be at the bottom of this post. I have HP Proliant server with Ubuntu 12.04 LTS installed. For this particular environment, I need this server to act as a PDC, file server, and print server. I began by updating and upgrading the packages (of course). Then went to install samba, gnome-desktop, wine, and cpanm. Samba was, of course, for the PDC and file/print services. The GUI was needed because a certain software has to be installed on there that needs a GUI. Wine was needed because the software is Windows-native. And cpanm was for a perl script I have running. For Samba, I went into the smb.conf file and enabled domain logons, changed the workgroup/domain name, the logon script for a per-group basis (netlogon/%g), enabled the netlogon and profiles share, and setup a couple of custom shares for the file service. The printer was added later, and seems to be working just fine. I then restarted the services, and used the net groupmap command to ensure my unix groups were mapped correctly to the Windows groups. After this, I went to a Windows box, and was able to successfully join the domain without a problem. After some fidgeting with the software to get it running on the win boxes from the server (it's a records management system program, which stores it's database files on the server), I went to add another computer to the domain. But now it's saying Access Denied. Before when I had this trouble it was because I forgot to add the group "machines" so Samba could create machine accounts. Thinking this was the case, I manually created the machine account to test this theory. However, it would still give me an Access Denied error. That must mean it has something to do with permissions now, correct? I've been fighting with this server for the past two weeks. If it's not one thing that;s wrong, then it's something else completely different. This would be the third time I've actually reinstalled everything to start over. I'll post snippets of my system settings below. If anything else is needed, just say the word and I'll gather up the info. The unix group 'domadmin' is the Domain Admins group. Samba Administrator account administrator:x:1000:1000:Administrator,,,:/home/administrator:/bin/bash Adminstrator's groups administrator adm cdrom sudo dip plugdev lpadmin sambashare domadmin crimestar Samba's Configuration FIle (a snippet anyways) [global] workgroup = CITYPD server string = BPDServer dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d security = user encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . 
pam password change = yes map to guest = bad user domain logons = yes logon path = \\%L\srv\samba\profiles\%U logon script = logon.bat add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u domain master = yes usershare allow guests = yes [netlogon] comment = Network Logon Service path = /srv/samba/netlogon/%g guest ok = yes read only = yes browseable = no [profiles] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no write list = root, @lpadmin [crimestar] comment = "Crimestar DB" path = /srv/crimestar/db valid users = @domadmin, @crimestar admin users = administrator writeable = yes guest ok = no browseable = no create mask = 0666 directory mask = 0777 [crimestarfiles] path = /home/administrator/.wine/drive_c/crimestar admin users = administrator browseable = yes ls -la on /srv/samba/profiles drwxrwxrwx 2 root machines 4096 Nov 21 15:27 . drwxr-xr-x 4 root root 4096 Nov 21 15:28 .. ls -la on /srv/samba/netlogon drwxr-xr-x 6 root root 4096 Nov 21 15:30 . drwxr-xr-x 4 root root 4096 Nov 21 15:28 .. drwxr-xr-x 2 root root 4096 Nov 21 15:30 crimestar drwxr-xr-x 2 root root 4096 Nov 21 18:13 domadmin drwxr-xr-x 3 root root 4096 Nov 21 15:30 guests drwxr-xr-x 2 root root 4096 Nov 21 15:29 users GrouMap list Domain Users (S-1-5-21-2978508755-2341913247-928297747-513) -> users Domain Admins (S-1-5-21-2978508755-2341913247-928297747-512) -> domadmin Domain Guests (S-1-5-21-2978508755-2341913247-928297747-514) -> nogroup TLDR I'm getting an Access Denied error message while trying to join a windows box to a samba domain, even after I successfully joined another computer without a problem. System settings / files are quoted above. Anyone have any ideas or suggestions?

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With &lsquo;document.write&rsquo;

    - by ToStringTheory
Introduction Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn't fit the model and usage defined by the API or documentation of the plugin? Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site. To use their plugin, you had to include a script tag to the plugin on their server with an API key. The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear. The Problem When trying to use the service, the images did not display. I checked a couple of things and didn't find anything wrong at first. It wasn't until I looked at the function that was called by the inline script that I found the issue – a call to the webservice, followed by a call to 'document.write' in its callback. The place where I was trying to use the plugin happened to be in response to an AJAX call after the document had completely loaded. After the page has loaded, document.write does nothing. My first thought for a solution was to just cache the script from the service, and edit it to do something like a return function or callback that I could use to edit the document from. However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server. The hash was updated every few seconds/minutes, expiring old hashes. This meant that I wouldn't be able to edit the script and upload a new version to my server, as the script would stop working a few minutes after originally getting it from the service. Solution The solution eluded me until I realized that this was JavaScript I was dealing with. A language designed so that you could do just about anything to any library, function, or object… At this point, the solution was simple – take control of the document.write function. Using a buffer variable and a simple function call, it is eerily simple to perform:
//what would have been output to the document
var buffer = "";
//store a reference to the real document.write
var dw = document.write;
//redefine document.write to store to our buffer
document.write = function (str) { buffer += str; };
//execute the function containing calls to document.write
eval('{function encapsulated in <script></script> tags}');
//restore the original document.write function (just in case)
document.write = dw;
That's it. Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL to the page I wanted a snapshot of. After that last line of code, what would have been output to the document (or not, in the case of the AJAX call) was instead stored in buffer. Conclusion While the solution itself is simple, coming from a background much more rooted in the .NET platform, I believe that this is a prime example of always keeping the language that you are working in in mind. While this may seem obvious at first, as I KNEW I was in JavaScript, I never thought of taking control of the document.write function because I am more accustomed to the .NET world. I can't simply replace the functionality of Console.WriteLine.

    Read the article

  • Managing a file-based public maven repository

    - by Roland Ewald
    I am looking for an easy way to manage a public file-based Maven repository. While we are using the open-source version of Artifactory internally, we now want to put a file-based repository of our published artifacts (and their dependencies) on a separate machine that is publicly available. There are several ways how to do this, but none of them seems ideal: Use Maven Dependency plugin: if it is configured correctly and executed with the goal dependency:copy-dependencies for the release-module of our project, it creates a local repository structure that is fine, but this structure does not contain the meta-data.xml files, nor the hash-sums. Use Artifactory to export repo: AFAIK Artifactory only allows to export a repository as a whole. This would include the non-published modules from our project (which would then need to be deleted manually). Also, all dependencies are sitting in another repository, so this needs to be done twice, and many dependencies are not even required by a published artifact (only by artifacts that are still for internal use only). Nevertheless, this method would also include the meta-data.xml files and the hash-sums for all files. To set up an initial version of the repository, I used a mixture of both methods: I first created the Maven repository for all required dependencies via dependency:copy-dependencies and then wrote a script to cherry-pick the meta-data.xml files (etc.) from Artifactory. This is terribly cumbersome, isn't there a better way to solve this? Maybe there is another Maven 3 - plugin that I am unaware of, or some other command-line tool that does the job? I basically just need a simple way to create a Maven repository that contains all artifacts a given artifact depends on (and no more), and also contains all meta-data expected in a remote repository. Any ideas?

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
    I've installed Mercurial per MT's knowledge base file here. Working with it server side using ssh from my Mac works fine. I can initialize repositories and the like, but pulling from the server or pushing from my Mac produces an error I don't understand. Here's what I get when call hg push from my local installation (hash marks represent my server number): remote: Traceback (most recent call last): remote: File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ? remote: mercurial.dispatch.run() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run remote: sys.exit(dispatch(sys.argv[1:])) remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch remote: u = _ui.ui() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__ remote: for f in util.rcpath(): remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath remote: _rcpath = os_rcpath() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath remote: path = system_rcpath() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath remote: path.extend(rcfiles(os.path.dirname(sys.argv[0]) + remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles remote: rcs.extend([os.path.join(rcdir, f) remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__ remote: self._load() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load remote: mod = _origimport(head, globals, locals) remote: ImportError: No module named osutil abort: no suitable response from remote hg! Mercurial on my Mac is configured as follows [ui] username = John Smith editor = te -w remotecmd = ~/data/mercurial-1.5/hg My local single repo is configured as follows (hash marks represent my server number): [paths] default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html Mercurial on the server is configured with a just a username: [ui] username = John Smith The server .bash_profile is configured as follows (per the installation guide): # Aliases alias ls-a='ls -a -l' # Added this as suggested by the MediaTemple guide export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH export PATH=${HOME}/bin:$PATH I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assitance on this problem. Thanks in advance!

    Read the article

  • c# How to Verify Signature, Loading PUBLIC KEY From PEM file?

    - by bbirtle
I'm posting this in the hope it saves somebody else the hours I lost on this really stupid problem involving converting formats of public keys. If anybody sees a simpler solution or a problem, please let me know! The eCommerce system I'm using sends me some data along with a signature. They also give me their public key in .pem format. The .pem file looks like this:
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDe+hkicNP7ROHUssGNtHwiT2Ew
HFrSk/qwrcq8v5metRtTTFPE/nmzSkRnTs3GMpi57rBdxBBJW5W9cpNyGUh0jNXc
VrOSClpD5Ri2hER/GcNrxVRP7RlWOqB1C03q4QYmwjHZ+zlM4OUhCCAtSWflB4wC
Ka1g88CjFwRw/PB9kwIDAQAB
-----END PUBLIC KEY-----
Here's the magic code to turn the above into an "RSACryptoServiceProvider" which is capable of verifying the signature. It uses the BouncyCastle library, since .NET apparently (and appallingly) cannot do it without some major headaches involving certificate files:
RSACryptoServiceProvider thingee;
using (var reader = File.OpenText(@"c:\pemfile.pem"))
{
    var x = new PemReader(reader);
    var y = (RsaKeyParameters)x.ReadObject();
    thingee = (RSACryptoServiceProvider)RSACryptoServiceProvider.Create();
    var pa = new RSAParameters();
    pa.Modulus = y.Modulus.ToByteArray();
    pa.Exponent = y.Exponent.ToByteArray();
    thingee.ImportParameters(pa);
}
And then the code to actually verify the signature:
var signature = ... //reads from the packet sent by the eCommerce system
var data = ... //reads from the packet sent by the eCommerce system
var sha = new SHA1CryptoServiceProvider();
byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(data));
byte[] bSignature = Convert.FromBase64String(signature);
///Verify signature, FINALLY:
var hasValidSig = thingee.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), bSignature);

    Read the article

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's. The current application that imports this data is a C# .NET 3.5 console application; it does so using SqlBulkCopy into an MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data, or rather, the fields that would make sense as a composite key also end up having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique. If one field is different, it is valid enough to be inserted. I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the BulkCopy operation, and I'm not sure if the whole operation would fail and roll back if any one record failed, or if it would continue. The file contains a very large number of records; going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert, is really the only other way I can see to do this. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster. Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestions for a different way to do this?
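One approach that keeps SqlBulkCopy in play is to compute the row hash on the client, bulk copy it alongside the other columns, and let a unique index created with IGNORE_DUP_KEY silently skip duplicates so the rest of the batch still loads. The sketch below is illustrative only; the table name, column names, and schema change are assumptions, not a tested fix:

// Sketch only, not a tested fix; table and column names are assumptions.
// Idea: add a RowHash binary(16) column to the destination table (backfilled for existing
// rows), then create a unique index such as:
//   CREATE UNIQUE INDEX UX_Transactions_RowHash ON dbo.Transactions (RowHash)
//       WITH (IGNORE_DUP_KEY = ON);
// With IGNORE_DUP_KEY = ON, duplicate hashes are discarded with a warning instead of
// failing and rolling back the whole bulk copy batch.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Security.Cryptography;
using System.Text;

static class BulkLoader
{
    static byte[] ComputeRowHash(DataRow row, DataColumnCollection columns)
    {
        // Delimit the fields so "ab" + "c" and "a" + "bc" produce different hashes.
        var sb = new StringBuilder();
        foreach (DataColumn col in columns)
        {
            if (col.ColumnName == "RowHash") continue;
            sb.Append(Convert.ToString(row[col])).Append('|');
        }
        using (var md5 = MD5.Create())
        {
            return md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
        }
    }

    public static void Load(DataTable table, string connectionString)
    {
        if (!table.Columns.Contains("RowHash"))
            table.Columns.Add("RowHash", typeof(byte[]));

        foreach (DataRow row in table.Rows)
            row["RowHash"] = ComputeRowHash(row, table.Columns);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.Transactions"; // assumed destination table
            foreach (DataColumn col in table.Columns)
                bulk.ColumnMappings.Add(col.ColumnName, col.ColumnName);
            bulk.WriteToServer(table);
        }
    }
}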

    Read the article

  • jquery tab switching links so they jump to top of page?

    - by Evan
I was given some great help by redsquare to get this far to change jQuery tabs from links within the content, but I have one more issue that I'm looking for support on... When a user clicks a link to switch to a different tab from within the content, can I get the page to jump to the top of the page? Within my demo link, scroll to the bottom of the page to click the links and they switch tabs just perfectly, which you can confirm if you scroll up. So, I'm looking for the switch to also jump the user to the top of the page, so the user doesn't have to "scroll" to the top of the page to begin reading new content. Here's my demo: http://jsbin.com/etoku3/11 Existing code:
<script type="text/javascript">
$(document).ready(function() {
  var $tabs = $("#container-1").tabs();
  var changeTab = function(ev){
    ev.preventDefault();
    var tabIndex = this.hash.charAt(this.hash.length-1) -1;
    $tabs.tabs('select', tabIndex);
  };
  $('a.tablink').click(changeTab);
});
</script>
Thank you so much! Evan
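One likely small change (untested sketch) is to scroll the window inside changeTab after the tab has been selected, for example:

// Sketch: same handler as above, with a scroll-to-top added after the tab switch.
var changeTab = function (ev) {
  ev.preventDefault();
  var tabIndex = this.hash.charAt(this.hash.length - 1) - 1;
  $tabs.tabs('select', tabIndex);
  $('html, body').animate({ scrollTop: 0 }, 'fast'); // or window.scrollTo(0, 0) for an instant jump
};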

    Read the article

< Previous Page | 204 205 206 207 208 209 210 211 212 213 214 215  | Next Page >