Search Results

Search found 18347 results on 734 pages for 'generate password'.

Page 199 of 734

  • CodePlex Daily Summary for Wednesday, January 26, 2011

    CodePlex Daily Summary for Wednesday, January 26, 2011Popular ReleasesCatel - WPF and Silverlight MVVM library: 1.1: (+) Styles can now be changed dynamically, see Examples application for a how-to (+) ViewModelBase class now have a constructor that allows services injection (+) ViewModelBase services can now be configured by IoC (via Microsoft.Unity) (+) All ViewModelBase services now have a unit test implementation (+) Added IProcessService to run processes from a viewmodel with directly using the process class (which makes it easier to unit test view models) (*) If the HasErrors property of DataObjec...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...MVC Foolproof Validation: Beta 0.9.4042: Fixed a few bugs and added ASP.NET MVC 3 support (Including Unobtrusive JavaScript support).Kooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??Developer Guidance - Onboarding Windows Phone 7: Fuel Tracker Source: Release NotesThis project is almost complete. We'll be making small updates to the code and documentation over the next week or so. We look forward to your feedback. This documentation and accompanying sample application will get you started creating a complete application for Windows Phone 7. You will learn about common developer issues in the context of a simple fuel-tracking application named Fuel Tracker. Some of the tasks that you will learn include the following: Creating a UI that...Password Generator: 2.2: Parallel password generation Password strength calculation ( Same method used by Microsoft here : https://www.microsoft.com/protect/fraud/passwords/checker.aspx ) Minor code refactoringVisual Studio 2010 Architecture Tooling Guidance: Spanish - Architecture Guidance: Francisco Fagas http://geeks.ms/blogs/ffagas, Microsoft Most Valuable Professional (MVP), localized the Visual Studio 2010 Quick Reference Guidance for the Spanish communities, based on http://vsarchitectureguide.codeplex.com/releases/view/47828. Release Notes The guidance is available in a xps-only (default) or complete package. The complete package contains the files in xps, pdf and Office 2007 formats. 2011-01-24 Publish version 1.0 of the Spanish localized bits.ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager the html generation has been optimized, the html page size is much smaller nowFacebook Graph Toolkit: Facebook Graph Toolkit 0.6: new Facebook Graph objects: Application, Page, Post, Comment Improved Intellisense documentation new Graph Api connections: albums, photos, posts, feed, home, friends JSON Toolkit upgraded to version 0.9 (beta release) with bug fixes and new features bug fixed: error when handling empty JSON arrays bug fixed: error when handling JSON array with square or large brackets in the message bug fixed: error when handling JSON obejcts with double quotation in the message bug fixed: erro...Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-01-23: Code samples for Visual Studio 2008BloodSim: BloodSim - 1.4.0.0: This version requires an update for WControls.dll. - Removed option to use Old Rune Strike - Fixed an issue that was causing Ratings to not properly update when running Progressive simulations - Ability data is now loaded from an XML file in the BloodSim directory that is user editable. This data will be reloaded each time a fresh simulation is run. - Added toggle for showing Graph window. When unchecked, output data will instead be saved to a text file in the BloodSim directory based on the...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (3): Instructions for installation: http://www.galasoft.ch/mvvm/installing/manually/ Includes the hotfix templates for Windows Phone 7 development. This is only relevant if you didn't already install the hotfix described at http://blog.galasoft.ch/archive/2010/07/22/mvvm-light-hotfix-for-windows-phone-7-developer-tools-beta.aspx.Community Forums NNTP bridge: Community Forums NNTP Bridge V42: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has added some features / bugfixes: Bugfix: Decoding of Subject now also supports multi-line subjects (occurs only if you have very long subjects with non-ASCII characters)Minecraft Tools: Minecraft Topographical Survey 1.3: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds automatic block list updates, so MTS will recognize blocks added in game updates properly rather than drawing them in bright pink. New in this version of MTS: Support for all new blocks added since the Halloween update Auto-update of blockcolors.xml to support future game updates A splash screen that shows while the program searches for upd...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14996.000: New Features: ============= This release is just compiled against the latest release of JetBrains ReSharper 5.1.1766.4 Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes...MediaScout: MediaScout 3.0 Preview 4: Update ReleaseMFCMAPI: January 2011 Release: Build: 6.0.0.1024 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. 
All other machines should use the 32 bit build, regardless of the operating system. Facebook BadgeAutoLoL: AutoLoL v1.5.4: Added champion: Renekton Removed automatic file association Fix: The recent files combobox didn't always open a file when an item was selected Fix: Removing a recently opened file caused an errorNew Projects? ! :): AskAnswerDone (? ! :)) is a free and open source Q&A platform. Your user choses a problem they're having (like "ERROR: 123" or "THE PICTURES DONT SHOW"), choses something about the program that they can help with, and gets their answer while helping someone else. Chat by Mibbit.BusinessControl2: BusinessControl project. v. 2Citizen Journalism Network Server: An educational project that will hopefully one day prove useful. Initially a simple ASP.NET MVC 3 implementation of the Atom Publishing Protocol, and later have features for federated servers, geolocation, advanced searching, and ratings.ciws-distr: CIWS distrContentFinder: Help user to find the content in every file in your folder.ContentFinder Support Regex Search and the UI will not freeze any more.CSC446: database driven website done with silverlightElos: Projeto ElosEnhanced Host File Manager: Enhanced Host File Editor / Manager Intended for web development (developers or designers) to switch between development and live easily, hassle free. Full access to the host file is required. It is assumed that host file is at default location: C:\Windows\System32\drivers\etcHelloWorld: As a sample center, HelloWorld sketches the skeletons of most Microsoft development techniques. Each sample is elaborately selected, composed, and documented to demonstrate one frequently-asked/tested/used scenario based on my experience as a support engineer. InkSpot: Adds SVG runtime support to WPF applications.ITaCS Change Password web part: This web part allows users to change their local or Active Directory password from within a MOSS or WSS Site.JoscalSoftware: At JoscalSoftware you can find programs designed (mostly) in Visual Basic 2010 Express. Most programs are to do with text editors, and webbrowsers.K.Frame: A open .net structual for enterprice application with MVC,iBatis.Net,WCF,etc.LogExpert: Windows tail program and log file analyzer.MasterGuitarReader: This project is a free guitar tab readerOpalis Scheduled Tasks Integration Pack: A Opalis integration pack for manipulating scheduled tasks on windows systems.OpenInsure: OpenInsure is an OpenSource Insurance Agent automation system. It is being developed using the .Net 4.0 frame work and is using a SQL 2008 backend. Orchard Hyperlink Custom Field: The Hyperlink module introduces a new field to store hyperlinks as custom types. Includes URL, label, title, and target. It's developed in C# as a plugin to the Orchard CMS project.Orchard Stars: A simple five-star rating Orchard module.PerformancePoint 2010 Content Deployment Tool: The PerformancePoint 2010 Content Deployment Tool is a command-line tool that allows you to deploy PerformancePoint content using an existing ddwx file and can be used to migrate content from development to production or between site collections on the same SharePoint server.Plat Manager / Real Estate management application: The administration module is meant to manage 3 web clients with their real estate databases. The main feature is ability to automatically generate the interface of the module based on the information pulled from the database. 
PHP, MySQL, MVC Kohana, Doctrene ORM, Ajax, ExtJSSalesManager: SalesManagerSercury: ????????,????WCF??????????????。SGPF: The team does not have nothing to declare here!Sitefinity Social Widgets Contrib: Sitefinity Social Widget Contrib makes it easier for GiveCamp Charities and other Sitefinity users to pull their social feeds into your site. You'll no longer have to build one off controls. It's developed in a combination of C# and JQuery.StackHash Plugin SDK & Sample Plugins (for WinQual / Windows Error Reporting): Plugin SDK and sample command line, email and FogBugz plugins for StackHash. StackHash is a tool that helps developers access crash reports from Microsoft's WinQual service (Windows Error Reporting). The SDK is designed to support synchronizing StackHash data with bug trackers.TIC: ticTwedge: Twedge is a customizable Silverlight-based Twitter badge, showing the latst tweets, based on your specific search term.Web Scripting and Content Creation Code Samples: Sample code used in the Web Scripting and Content Creation course at the University of HertfordshirewebSurfer - a test tool: Windows Workflow Foundation, WWF, test web pagesWLCompus: WLCompusYellow Rice: Yellow RiceZDStar Enterprise Framework: ZDStar Enterprise Framework for Small & Medium Size Enterprises.

    Read the article

  • CodePlex Daily Summary for Tuesday, January 25, 2011

    CodePlex Daily Summary for Tuesday, January 25, 2011Popular ReleasesKooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??Developer Guidance - Onboarding Windows Phone 7: Fuel Tracker Source: Release NotesThis project is almost complete. We'll be making small updates to the code and documentation over the next week or so. We look forward to your feedback. This documentation and accompanying sample application will get you started creating a complete application for Windows Phone 7. You will learn about common developer issues in the context of a simple fuel-tracking application named Fuel Tracker. Some of the tasks that you will learn include the following: Creating a UI that...Password Generator: 2.2: Parallel password generation Password strength calculation ( Same method used by Microsoft here : https://www.microsoft.com/protect/fraud/passwords/checker.aspx ) Minor code refactoringASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager the html generation has been optimized, the html page size is much smaller nowFacebook Graph Toolkit: Facebook Graph Toolkit 0.6: new Facebook Graph objects: Application, Page, Post, Comment Improved Intellisense documentation new Graph Api connections: albums, photos, posts, feed, home, friends JSON Toolkit upgraded to version 0.9 (beta release) with bug fixes and new features bug fixed: error when handling empty JSON arrays bug fixed: error when handling JSON array with square or large brackets in the message bug fixed: error when handling JSON obejcts with double quotation in the message bug fixed: erro...Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-01-23: Code samples for Visual Studio 2008BloodSim: BloodSim - 1.4.0.0: This version requires an update for WControls.dll. - Removed option to use Old Rune Strike - Fixed an issue that was causing Ratings to not properly update when running Progressive simulations - Ability data is now loaded from an XML file in the BloodSim directory that is user editable. This data will be reloaded each time a fresh simulation is run. - Added toggle for showing Graph window. When unchecked, output data will instead be saved to a text file in the BloodSim directory based on the...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (3): Instructions for installation: http://www.galasoft.ch/mvvm/installing/manually/ Includes the hotfix templates for Windows Phone 7 development. 
This is only relevant if you didn't already install the hotfix described at http://blog.galasoft.ch/archive/2010/07/22/mvvm-light-hotfix-for-windows-phone-7-developer-tools-beta.aspx.Community Forums NNTP bridge: Community Forums NNTP Bridge V42: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has added some features / bugfixes: Bugfix: Decoding of Subject now also supports multi-line subjects (occurs only if you have very long subjects with non-ASCII characters)Minecraft Tools: Minecraft Topographical Survey 1.3: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds automatic block list updates, so MTS will recognize blocks added in game updates properly rather than drawing them in bright pink. New in this version of MTS: Support for all new blocks added since the Halloween update Auto-update of blockcolors.xml to support future game updates A splash screen that shows while the program searches for upd...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14996.000: New Features: ============= This release is just compiled against the latest release of JetBrains ReSharper 5.1.1766.4 Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes...jQuery ASP.Net MVC Controls: Version 1.2.0.0: jqGrid 3.8 support jquery 1.4 support New and exciting features Many bugfixes Complete separation from the jquery, & jqgrid codeMediaScout: MediaScout 3.0 Preview 4: Update ReleaseCoding4Fun Tools: Coding4Fun.Phone.Toolkit v1: Coding4Fun.Phone.Toolkit v1MFCMAPI: January 2011 Release: Build: 6.0.0.1024 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. Facebook BadgeAutoLoL: AutoLoL v1.5.4: Added champion: Renekton Removed automatic file association Fix: The recent files combobox didn't always open a file when an item was selected Fix: Removing a recently opened file caused an errorDotNetNuke® Community Edition: 05.06.01: Major Highlights Fixed issue to remove preCondition checks when upgrading to .Net 4.0 Fixed issue where some valid domains were failing email validation checks. Fixed issue where editing Host menu page settings assigns the page to a Portal. Fixed issue which caused XHTML validation problems in 5.6.0 Fixed issue where an aspx page in any subfolder was inaccessible. Fixed issue where Config.Touch method signature had an unintentional breaking change in 5.6.0 Fixed issue which caused...iTracker Asp.Net Starter Kit: Version 3.0.0: This is the inital release of the version 3.0.0 Visual Studio 2010 (.Net 4.0) remake of the ITracker application. I connsider this a working, stable application but since there are still some features missing to make it "complete" I'm leaving it listed as a "beta" release. I am hoping to make it feature complete for v3.1.0 but anything is possible.New Projectsarchicop: Make your .NET Code Clean with ArchiCop! 
ArchiCop is a Visual Studio tool to manage complex .NET code and achieve high Code Quality. It's very simple to use. With ArchiCop you can validate references, project properties and group your projects in layers.Blitzkrieg Board Game (1965): I'm currently working on making Blitzkrieg using .NET 4 and WPF (http://www.boardgamegeek.com/boardgame/4168/blitzkrieg). The goal of this project is to make the game extremely mod-able, by allowing the end-user to change the sprites, game object types, map cell types, and more.Candy: Candy is modern gene-fishing platform which enables Biologists to find candidate genes with ease. Candy is implemented in C# and uses Silverlight to deliver a rich user experience.CMScreator: Content Management System CreatorDecoratorSharp: DecoratorSharp is a Decorator from python.Dorm (beta) - .Net ORM: It’s a light-weight ORM framework for .Net. The idea is to provide all the productivity of an ORM tool without abstracting entirely the database, therefore giving the developer a finer degree of control on data manipulation.DxStudyPro: DirectX Study ProFRCRobotCode: Lightsabers FRC robot code.grouptalk: grouptalkHeejooShop: Heejoo ShopIC Colorful: The aim of this project is to implement an algorithm allowing color-blind people to distinguish colors that they fail to distinguish in regular circumstances. This project is implemented as a DirectShow filter, thus could be used as a video codec as well as live video stream traItzben: Universally useful XAML behaviors, styles, and value converters. Developed in C# for WPF, Silverlight, and WP7.JSON Toolkit: JSON Toolkit is a .NET library written in C# used to handle JSON objects, convert string to JSON objects and obtainn values from them.Materials Data Centre: This project is to create aa materials data centre; a repository for experimental data. See www.materialsdatacentre.com It is built on Microsoft SharePoint with functionality exposed through Web Services.OpenCLI: OpenCLI brings a linux style CLI written in .NET. It acts as a portal from linux style commands, to their commands on the windows system. It is useful to Sysadmins, who wish to deny their uses to the power of Command Prompt, yet want them to be able to user a CLI.Pinguim Cook .Net - Automação de Restaurantes: Em CriaçãoPixination: Pixination is a tiny color picker to get any color from the screen. You will have a collection of color patterns to work with. Developed on C# with Framework 2.0Project Acolythe: Project Acolythe is the codename for a new Open Source RPG. Based upon different ideas from currently available RPGs the Game will have an incredible depth in gameplay. The Team behind is currently a bit small, so if you are interessted to help us out, leave a note ;)Propono: Blogging application based on ASP.NET MVCSharePoint 2010 Excel Export Feature: This feature can be used to export SharePoint lists to Excel Files (.xslx). You can upload templates for the excel export.SilverlightMedia JavaScript Library: A JavaScript library making it quick and easy to work with any web-based media player built with Silverlight. You can take control of any Silverlight media player using pure JavaScript. This facilitates useful scenarios e.g. creating HTML controls, displaying player status, etc.Soluciones: El project contiene herramientas de marketing para todo tipo de tama~o de empresas. Estaba basada en el standard definido por el mercado mismo. 
Estare publicando las descripcion del project en mas detalle.Soluciones Integrales: El projecto ve la necesidad de poder dar soporte al area de Marketing La finalidad de este proyecto es la implementacion de herramientas de marketing para que las empresas puedan ser mas eficientes al momento de procesar tareas de marketing. Les permitira: - enviar correos masivos - actualizacion de websites a distancia - ingreso de nuevos clientes a dasedatosSparkle Engine: Sparkle engine is a library for various types of effects. Currently I'm working on creating some basic text effects. Storyline Toolbox: Storyline Toolbox 2 is a web platform helping visualize the Storyline method. Version 1 of the Storyline Toolbox was developed in Perl by Linpro (now Redpill) in 2003. This project aims to providy a better user experience for students and teachers using the Storyline Toolbox.TFS Workbench Plugins: This project is dedicated to extending the functionality of the greatest Scrum tool, TFS Workbench.Tireguit Nmon Analyzer: An AIX and linux Nmon Analyzer; It uses SQL Compact 3.5 and C#. For more information on Nmon please visit http://nmon.sourceforge.net/pmwiki.php.uglifycs: Uglify.JS (https://github.com/mishoo/UglifyJS) and jsBeautifier (http://jsbeautifier.org) wrapped using JavaScript.NET (http://javascriptnet.codeplex.com) for use in .NET projects.usea: my study projectWindows Phone Cryptographic Storage: A C# library for easily storing and protecting data with a password on Windows Phone. The data is encrypted and authenticated with keys derived from a user provided password. Yahoo! Decoder: Yahoo! Decoder helps you read Yahoo! messenger chat archives offline with full support of smilies and links.????: “????”?????????????????????WinForm????,??C#????。?????????,????(2??),????!

    Read the article

  • Creating static NAT blocks outbound traffic Cisco ASA

    - by natediggs
    Hi Everyone, I have two web servers sitting behind a Cisco ASA 5505, which I don't have much experience with. I'm trying to create two static NATs. One static NAT that goes to xx.xx.xx.150 and another that goes to xx.xx.xx.151. I've created the static NAT for the .150 web server and it works FINE. Incoming and outgoing traffic work great. This is the staging web server. I now need to duplicate the setup for the production web server. So, I connect the webserver to the firewall, change the public IP address on one of the NICs reboot the server and I have outbound internet access. Then I run the command: static (inside,outside) xx.xx.xx.150 192.168.1.x which is successful. I then run the command: access-list acl-outside permit tcp any host xx.xx.xx.150 eq 80 Which is successful. I then try to browse the internet and I get nothing. I try to telnet in through port 80 and I get nothing (though I'm guessing because the response to the telnet request is being blocked). I've tried this with the production web server and then I tried it with another web server that is for internal testing and have the exact same problem. Both work fine until I run the static NAT rule and then no outbound internet access. I have a feeling that it's something simple that I'm missing, but my limited experience with this device is killing me. Below I've pasted the current configuration. I'm currently trying to get this to work on the .153 server which is the internal testing server. Once I can verify that works, I'll try it with production. : Saved : ASA Version 8.2(4) ! hostname QG domain-name XX.com enable password passwd names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address XX.XX.XX.148 255.255.255.0 ! interface Vlan3 shutdown no forward interface Vlan1 nameif dmz security-level 50 ip address dhcp ! 
boot system disk0:/asa824.bin ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name fw.XXgroup.com same-security-traffic permit inter-interface access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www access-list inside_access_in extended permit ip 192.168.1.0 255.255.255.0 any access-list inside_nat0_outbound extended permit ip any 192.168.1.32 255.255.255.240 pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 mtu dmz 1500 ip local pool VPNIPs 192.168.1.35-192.168.1.44 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-635.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) XX.XX.XX150 192.168.1.100 netmask 255.255.255.255 static (inside,outside) XX.XX.XX153 192.168.1.102 netmask 255.255.255.255 access-group acl-outside in interface outside route outside 0.0.0.0 0.0.0.0 XX.XX.XX129 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authorization command LOCAL http server enable http 192.168.1.0 255.255.255.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group1 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication crack encryption 3des hash sha group 2 lifetime 86400 no crypto isakmp nat-traversal client-update enable telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd dns 208.77.88.4 interface inside dhcpd enable inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn enable outside svc image disk0:/sslclient-win-1.1.0.154.pkg 1 svc image disk0:/anyconnect-win-2.5.2019-k9.pkg 2 svc enable group-policy ATSAdmin internal group-policy ATSAdmin attributes dns-server value 208.77.88.4 208.85.174.9 vpn-tunnel-protocol IPSec svc webvpn webvpn url-list none svc keep-installer installed svc rekey method ssl svc ask enable username qgadmin password /oHfeGQ/R.bd3KPR encrypted privilege 15 username benl password 0HNIGQNI0uruJvhW encrypted privilege 0 username benl attributes vpn-group-policy ATSAdmin username kuzma password rH7MM7laoynyvf9U encrypted privilege 0 username kuzma attributes vpn-group-policy ATSAdmin username nate password BXHOURyT37e4O5mt encrypted privilege 0 username nate attributes vpn-group-policy ATSAdmin tunnel-group ATSAdmin type remote-access tunnel-group ATSAdmin general-attributes address-pool VPNIPs default-group-policy ATSAdmin tunnel-group SSLVPN type remote-access tunnel-group SSLVPN general-attributes address-pool VPNIPs default-group-policy ATSAdmin ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global privilege cmd level 3 mode exec command perfmon privilege cmd level 3 mode exec command ping privilege cmd level 3 mode exec command who privilege cmd level 3 mode exec command logging privilege cmd level 3 mode exec command failover privilege show level 5 mode exec command running-config privilege show level 3 mode exec command reload privilege show level 3 mode exec command mode privilege show level 3 mode exec command firewall privilege show level 3 mode exec command interface privilege show level 3 mode exec command clock privilege show level 3 mode exec command dns-hosts privilege show level 3 mode exec command access-list privilege show level 3 mode exec command logging privilege show level 3 mode exec command ip privilege show level 3 mode exec command failover privilege show level 3 mode exec command asdm privilege show level 3 mode exec command arp privilege show level 3 mode exec command route privilege show level 3 mode exec command ospf privilege show level 3 mode exec command aaa-server privilege show level 3 mode exec command aaa privilege show level 3 mode exec command crypto privilege show level 3 mode exec command vpn-sessiondb privilege show level 3 mode exec command ssh privilege show level 3 mode exec command dhcpd privilege show level 3 mode exec command vpn privilege show level 3 mode exec command blocks privilege show level 3 mode exec command uauth privilege show level 3 mode configure command interface privilege show level 3 mode configure command clock privilege show level 3 mode configure command access-list privilege show level 3 mode configure command logging privilege show level 3 mode configure command ip privilege show level 3 mode configure command failover privilege show level 5 mode configure command asdm privilege show level 3 mode configure command arp privilege show level 3 mode configure command route privilege show level 3 mode configure command aaa-server privilege show 
level 3 mode configure command aaa privilege show level 3 mode configure command crypto privilege show level 3 mode configure command ssh privilege show level 3 mode configure command dhcpd privilege show level 5 mode configure command privilege privilege clear level 3 mode exec command dns-hosts privilege clear level 3 mode exec command logging privilege clear level 3 mode exec command arp privilege clear level 3 mode exec command aaa-server privilege clear level 3 mode exec command crypto privilege cmd level 3 mode configure command failover privilege clear level 3 mode configure command logging privilege clear level 3 mode configure command arp privilege clear level 3 mode configure command crypto privilege clear level 3 mode configure command aaa-server prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily Cryptochecksum:0ed0580e151af288d865f4f3603d792a : end asdm image disk0:/asdm-635.bin no asdm history enable
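    A few standard ASA diagnostics can help narrow this kind of problem down. The commands below are a sketch only (the syntax assumes the 8.2 software shown in the configuration; the 8.8.8.8 destination and 1025 source port are arbitrary placeholders, and 192.168.1.102 is the internal test server from the static above):

        ! Show the translations currently built for the inside hosts
        show xlate
        ! Trace a simulated outbound connection from the .153 server and see which rule, if any, drops it
        packet-tracer input inside tcp 192.168.1.102 1025 8.8.8.8 80
        ! Check hit counts on the outside access list
        show access-list acl-outside
        ! Existing dynamic translations can linger after a static is added; clearing them forces a rebuild
        clear xlate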

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release brings a large number of improvements on many fronts; in this post I will point out some of the major ones.
    1) IDE improvements - The Visual Studio IDE has been rewritten in WPF, and its look and feel has been refreshed for better readability. The Start Page has been redesigned and is now template-based, so anyone can change it as they wish.
    2) Multiple monitors - Multiple-monitor support already existed in Visual Studio, but in this edition it has improved so much that we can now move document, design and code windows outside the IDE onto another monitor.
    3) Zoom in the code editor - Rebuilding the editors in WPF has enabled significant improvements; the one I like best is zoom. We can now zoom the code editor with Ctrl + mouse scroll. Zoom does not work on the design surface or on icon-based windows such as Solution Explorer and the Toolbox.
    4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular block by holding down the Alt key while selecting with the mouse, and then insert text into the selection, paste the same code on multiple lines, and so on. This is helpful, for example, when converting a number of variables from public to private.
    5) New improved search - One of the best productivity improvements in Visual Studio 2010 is search-as-you-type support in the Navigate To window, which is opened by pressing Ctrl + ,. Navigate To also understands camel casing and can search by the upper-case letters of an identifier; for example, we can search for AOH to find AddOrderHeader.
    6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. It also shows implementations of interface members and overrides of virtual or abstract methods. The window is very helpful for understanding code flow and evaluating the effect of making changes, and the best part is that it is available at design time, not only at run time like the debugger.
    7) Highlighting references - One of the very cool features in Visual Studio 2010: if you select a symbol, every use of that symbol is highlighted alongside. This works for all the symbols that Find All References would return, including class names, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through the highlighted references.
    8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field or enum that you want to use but have not yet defined, without leaving your current location in code, which minimizes interruption to your workflow (see the sketch after this list).
    9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and commit the entry, the text you typed is inserted into the code; in completion mode, the editor inserts the entry that is highlighted in the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.
    10) Application Lifecycle Management - A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal and so on) is available for free (this is not available for the Express edition).
    11) Start Page - The Start Page has been redesigned in WPF with new functionality and a new look. Tabbed areas show content from different sources, including MSDN. Once you open a project the Start Page closes automatically, the list of recent projects lets you remove entries from the list, and above all the Start Page is customizable enough to be changed to individual requirements.
    12) Extension Manager - Visual Studio 2010 provides good ways to be extended, including MEF for extending most of its features. The new Extension Manager can browse the Visual Studio Gallery and install extensions without even opening a web browser.
    13) Code snippets - Code snippets are now available for HTML, JScript and ASP.NET as well.
    14) Improved IntelliSense for JavaScript - JavaScript IntelliSense has been improved vastly (around 2-5 times), and it now also shows XML documentation comments as you type.
    15) Web deployment - Web deployment has been vastly improved; we can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment and web configuration transformation.
    16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate a SharePoint project from within Visual Studio, and deploying a site is as easy as hitting F5.
    17) Azure - Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.
    Vikram
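    As a rough illustration of item 8 (this is not code from the post; OrderHeader and AddLine are hypothetical names used only for the example), the snippet below shows the shape of the stub Visual Studio generates when you write the call first:

        using System;

        public class Demo
        {
            public static void Main()
            {
                // Typed before OrderHeader existed; "Generate from usage" creates the stubs below.
                var order = new OrderHeader();
                order.AddLine("Widget", 2);
            }
        }

        // Stub generated by Visual Studio; the body defaults to throwing NotImplementedException.
        public class OrderHeader
        {
            public void AddLine(string product, int quantity)
            {
                throw new NotImplementedException();
            }
        }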

    Read the article

  • Can't focus fancybox iframe input

    - by bswinnerton
    So I'm using fancybox to load up a login iframe, and I would like it onComplete to bring focus to the username field. If someone could take a look at this code and let me know what's wrong, that'd be great. Gracias. /* Function to resize the height of the fancybox window */ (function($){ $.fn.resize = function(width, height) { if (!width || (width == "inherit")) inner_width = parent.$("#fancybox-inner").width(); if (!height || (height == "inherit")) inner_height = parent.$("#fancybox-inner").height(); inner_width = width; outer_width = (inner_width + 14); inner_height = height; outer_height = (inner_height + 14); parent.$("#fancybox-inner").css({'width':inner_width, 'height':inner_height}); parent.$("#fancybox-outer").css({'width':outer_width, 'height':outer_height}); } })(jQuery); $(document).ready(function(){ var pasturl = parent.location.href; $("a.iframe#register").fancybox({ 'transitionIn' : 'fade', 'transitionOut' : 'fade', 'speedIn' : 600, 'speedOut' : 350, 'width' : 450, 'height' : 385, 'scrolling' : 'no', 'autoScale' : false, 'autoDimensions' : false, 'overlayShow' : true, 'overlayOpacity' : 0.7, 'padding' : 7, 'hideOnContentClick': false, 'titleShow' : false, 'onStart' : function() { $.fn.resize(450,385); }, 'onComplete' : function() { $("#realname").focus(); }, 'onClosed' : function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); } }); $("a.iframe#login").fancybox({ 'transitionIn' : 'fade', 'transitionOut' : 'fade', 'speedIn' : 600, 'speedOut' : 350, 'width' : 400, 'height' : 250, 'scrolling' : 'no', 'autoScale' : false, 'overlayShow' : true, 'overlayOpacity' : 0.7, 'padding' : 7, 'hideOnContentClick': false, 'titleShow' : false, 'onStart' : function() { $.fn.resize(400,250); }, 'onComplete' : function() { $("#login_username").focus(); }, 'onClosed' : function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); } }); $("#register").click(function() { $("#login_form").hide(); $(".registertext").hide(); $.fn.resize(450,385); $("label").addClass("#register_form label"); }); $("#login").click(function() { $.fn.resize(400,250); $("label").addClass("#login_form label"); }); $("#register_form").bind("submit", function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); if ($("#realname").val().length < 1 || $("#password").val().length < 1 || $("#username").val().length < 1) { $("#no_fields").addClass("warningmsg").show().resize(inherit,405); return false; } if ($("#password").val() != $("#password2").val()) { $("#no_pass_match").addClass("errormsg").show().resize(); return false; } $.fancybox.showActivity(); $.post( "../../admin/users/create_submit.php", { realname:$('#realname').val(), email:$('#email').val(), username:$('#username').val(), password:MD5($('#password').val()), rand:Math.random() } ,function(data){ if(data == "OK"){ $(".registerbox").hide(); $.fancybox.hideActivity(); $.fn.resize(inherit,300); $("#successful_login").addClass("successfulmsg").show(); } else if(data == "user_taken"){ $.fancybox.hideActivity(); $("#user_taken").addClass("errormsg").show().resize(inherit,405); $("#username").val(""); } else { $.fancybox.hideActivity(); document.write("Well, that was weird. 
Give me a shout at [email protected]."); } return false; }); return false; }); $("#login_form").bind("submit", function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); if ($("#login_username").val().length < 1 || $("#login_password").val().length < 1) { $("#no_fields").addClass("warningmsg").show().resize(inherit,280); return false; } $.fancybox.showActivity(); $.post( "../../admin/users/login_submit.php", { username:$('#login_username').val(), password:MD5($('#login_password').val()), rand:Math.random() } ,function(data){ if(data == "authenticated"){ $(".loginbox").hide(); $(".registertext").hide(); $.fancybox.hideActivity(); $("#successful_login").addClass("successfulmsg").show(); parent.document.location.href=pasturl; } else if(data == "no_user"){ $.fancybox.hideActivity(); $("#no_user").addClass("errormsg").show().resize(); $("#login_username").val(""); $("#login_password").val(""); } else if(data == "wrong_password"){ $.fancybox.hideActivity(); $("#wrong_password").addClass("warningmsg").show().resize(); $("#login_password").val(""); } else { $.fancybox.hideActivity(); document.write("Well, that was weird."); } return false; }); return false; }); }); And here is the HTML: <p><a class="iframe" id="login" href="/login/">Login</a></p>
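    One thing worth checking (an assumption, since it is not confirmed in the question): with type iframe content, the onComplete callback runs in the parent page, so $("#login_username") searches the parent document rather than the document loaded inside the iframe. A rough sketch of reaching into the iframe instead, assuming fancybox 1.x, which gives its generated iframe the id fancybox-frame:

        $("a.iframe#login").fancybox({
            // ...same options as in the question...
            'onComplete' : function() {
                // Look the field up inside the iframe's own document, not the parent page.
                // "#fancybox-frame" is the id fancybox 1.x uses for its iframe; verify it for your version.
                $("#fancybox-frame").contents().find("#login_username").focus();
            }
        });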

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you and you are breaking all search engine links to your content (that others will try and follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves. 1. The IIS7 Rewrite Module and web.config There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules: To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here. 2. Monthly Archives Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004_07_01_mothblog_archive.html dasBlog: /Blog/default,month,2004-07.aspx In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect". 3. Categories Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/labels/Personal.html dasBlog: /Blog/CategoryView,category,Personal.aspx In my web.config file the rule that deals with this is the one named "category_redirect". 4. Posts Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004/07/hello-world.html dasBlog: /Blog/Hello-World.aspx In my web.config file the rule that deals with this is the one named "post_redirect". Note: The decision is taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info then it would have to include the day part, which blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision, is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher. 5. 
Unhandled by a generic rule Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section where a post titled "Hello World" generates a URL with the words separated by a hyphen. Some times that is not the case, for example: /Blog/2006/05/medc-wrap-up.html /Blog/MEDC-Wrapup.aspx or /Blog/2005/01/best-of-moth-2004.html /Blog/Best-Of-The-Moth-2004.aspx or /Blog/2004/11/more-windows-mobile-2005-details.html /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx In short, blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserve hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy them to your web.config rewriteMap. 6. C# code doing the same as the web.config I wrote some naive code that does the same thing as the web.config: given a string it will return a new string converted according to the 3 rules above. It does not take into account the 4th case where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account). static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html"; static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html"; static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html"; static string Redirect(string oldUrl) { GroupCollection g; if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g)) return string.Concat(g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g)) return string.Concat("CategoryView,category,", g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g)) return string.Concat("default,month,", g[1].Value, "-", g[2], ".aspx"); return string.Empty; } static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g) { if (pattern.Length == 0) { g = null; return false; } g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups; return (g.Count == groupCount); } Comments about this post welcome at the original blog.
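    As a quick illustration (not part of the original post), a small console harness exercising Redirect against the sample URLs from sections 2-4 could look like the following, assuming the REGEX_* fields, Redirect and RunRegExOnIt from above are pasted into the same class:

        using System;
        using System.Text.RegularExpressions;

        class RedirectDemo
        {
            static void Main()
            {
                // Expected results taken from the mappings shown in sections 2-4.
                Console.WriteLine(Redirect("/Blog/2004_07_01_mothblog_archive.html")); // default,month,2004-07.aspx
                Console.WriteLine(Redirect("/Blog/labels/Personal.html"));             // CategoryView,category,Personal.aspx
                Console.WriteLine(Redirect("/Blog/2004/07/hello-world.html"));         // hello-world.aspx
            }

            // Paste REGEX_post_redirect, REGEX_category_redirect, REGEX_monthlyarchive_redirect,
            // Redirect and RunRegExOnIt from the post here.
        }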

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In first release of Oracle Utilities Customer Care And Billing (V1.x I am talking about), we used to use Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server it was not at the same granularity as txrpt or as useful. I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework but it is now accessible via JMX and also we have added more detail into the performance tracking. Most of this new design was working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this is information is supplied a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6750 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility the JMX facility can be used. For illustrative purposes I will use jconsole but any JSR160 complaint browser or client can be used (with the appropriate configuration). 
Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or for remote connection, ensure java is in your path and jconsole.jar in your classpath) you specify the URL in the spl.runtime.management.connnector.url.default entry. For example: You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following: The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions. The Minimum and Maximum are also collected to give you an idea of the spread of performance. The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data. The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site. There are a number of interesting operations that can also be performed: reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset. completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can be then loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User ... CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN ... There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full et of operations and attributes. This is one of the many features I am proud that we implemented as it allows flexible monitoring of the performance of the product.
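    As a rough illustration of the "load it into a tool" step (not code from the post; the perf_dump.csv file name is an assumption), a small C# sketch that reads a saved execution dump in the column order shown above and lists the slowest services by average time could look like this:

        using System;
        using System.Globalization;
        using System.IO;
        using System.Linq;

        class PerfDumpTop
        {
            static void Main()
            {
                // Assumed column order: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, ...
                var rows = File.ReadAllLines("perf_dump.csv")
                    .Skip(1)                              // skip the header row
                    .Select(line => line.Split(','))
                    .Where(cols => cols.Length >= 6)
                    .Select(cols => new
                    {
                        Service = cols[0].Trim(),
                        Type = cols[1].Trim(),
                        Avg = double.Parse(cols[4], CultureInfo.InvariantCulture),
                        Calls = int.Parse(cols[5].Trim())
                    });

                // Ten slowest services by average response time
                foreach (var r in rows.OrderByDescending(r => r.Avg).Take(10))
                    Console.WriteLine("{0} ({1}): average {2:F1} over {3} calls", r.Service, r.Type, r.Avg, r.Calls);
            }
        }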

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component, that allows developers to programmatically create or manipulate PDF documents. Also this article discusses the process of creating in-memory file, read/write data from/to the in-memory file utilizing the new feature MemoryMappedFile. I have a database of users, where I need to send a notice to all my users as a PDF document. The sending mail part of it is not covered in this article. The PDF document will contain the company letter head, to make it more official. I have a list of users stored in a database table named “tblusers”. For each user I need to send customized message addressed to them personally. The database structure for the users is give below. id Title Full Name 1 Mr. Sreeju Nair K. G. 2 Dr. Alberto Mathews 3 Prof. Venketachalam Now I am going to generate the pdf document that contains some message to the user, in the following format. Dear <Title> <FullName>, The message for the user. Regards, Administrator Also I have an image, bg.jpg that contains the background for the document generated. I have created .Net 4.0 empty web application project named “iTextSharpSample”. First thing I need to do is to download the iTextSharp dll from the source forge. You can find the url for the download here. http://sourceforge.net/projects/itextsharp/files/ I have extracted the Zip file and added the itextsharp.dll as a reference to my project. Also I have added a web form named default.aspx to my project. After doing all this, the solution explorer have the following view. In the default.aspx page, I inserted one grid view and associated it with a SQL Data source control that bind data from tblusers. I have added a button column in the grid view with text “generate pdf”. The output of the page in the browser is as follows. Now I am going to create a pdf document when the user clicking on the Generate PDF button. As I mentioned before, I am going to work with the file in memory, I am not going to create a file in the disk. I added an event handler for button by specifying onrowcommand event handler. My gridview source looks like <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" > ………………………………………………………………………….. ………………………………………………………………………….. </asp:GridView> In the code behind, I wrote the corresponding event handler. protected void Generate_PDF(object sender, GridViewCommandEventArgs e) { // The button click event handler code. // I am going to explain the code for this section in the remaining part of the article } The Generate_PDF method is straight forward, It get the title, fullname and message to some variables, then create the pdf using these variables. The code for getting data from the grid view is as follows // get the row index stored in the CommandArgument property int index = Convert.ToInt32(e.CommandArgument); // get the GridViewRow where the command is raised GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index]; string title = selectedRow.Cells[1].Text; string fullname = selectedRow.Cells[2].Text; string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personnal details by visiting the member area of the website. ................................... 
"; since I don’t want to save the file in the disk, I am going the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using Memory-Mapped mapped file, you can created non-persisted memory mapped files, that are not associated with a file in a disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code using MemoryMappedFile object to create a test pdf document in memory and perform read/write operation on file. The CreateViewStream() object will give you a stream that can be used to read or write data to/from file. The code is very straight forward and I included comment so that you can understand the code. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); b = new byte[mmf.CreateViewStream().Length]; rdr.Read(b, 0, (int)mmf.CreateViewStream().Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created looks as follows. Summary: Creating pdf document using iTextSharp is easy. You will get lot of information while surfing the www. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
    Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome with translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others enter obsolescence. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0. In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs. First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've setup the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager. We're done with the setup that we need in order to start our translation work. We'll now switch to the command-line and to our favorite text editor. Open a command-line on the Orchard web site folder. I found the easiest way to do this is to do a SHIFT+right-click on the Orchard.Web folder in Windows Explorer and to click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation. First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code.
extract default translation /Output:\temp
This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not. 
If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue, the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation). sync translation /Input:\temp\orchard.po /Culture:fr-FR After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor. For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there but they will be prefixed with the following comment: # Obsolete translation Conveniently, all the obsolete strings will be grouped at the end of the file. You can select all those and delete them. For all the new strings, you will see the following comment: # Untranslated string This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in: msgstr "" Don't introduce hard carriage returns in the strings, just stay on one line (your text editor should do some reasonable wrapping so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor is saving using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save": When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list). package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp You should now see a Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site: install translation \temp\orchard.fr-fr.po.zip Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected: And that's it really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
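To give an idea of what the editing step looks like in practice, here is roughly what an entry flagged by the sync command looks like before and after translation. The msgid shown ("Manage Blogs") is just an illustrative example, and the exact set of comment lines in your files may differ slightly; the parts that matter are the # Untranslated string marker and the empty msgstr that you fill in.

```
# Untranslated string
msgid "Manage Blogs"
msgstr ""
```

Once translated (here into French), the entry simply carries the translated text in msgstr:

```
msgid "Manage Blogs"
msgstr "Gérer les blogs"
```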

    Read the article

  • WCF fails to deserialize correct(?) response message security headers (Security header is empty)

    - by Soeteman
    I'm communicating with an OC4J webservice, using a WCF client. The client is configured as follows: <basicHttpBinding> <binding name="MyBinding"> <security mode="TransportWithMessageCredential"> <transport clientCredentialType="None" proxyCredentialType="None" realm=""/> <message clientCredentialType="UserName" algorithmSuite="Default"/> </security> </binding> My clientcode looks as follows: ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy(); string username = ConfigurationManager.AppSettings["user"]; string password = ConfigurationManager.AppSettings["pass"]; // client instance maken WebserviceClient client = new WebserviceClient(); client.Endpoint.Binding = new BasicHttpBinding("MyBinding"); // credentials toevoegen client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; //uitvoeren request var response = client.Ping(); I've altered the CertificatePolicy to accept all certificates, because I need to insert Charles (ssl proxy) in between client and server to intercept the actual Xml that is sent across te wire. My request looks as follows: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2010-04-01T09:47:01.161Z</u:Created> <u:Expires>2010-04-01T09:52:01.161Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-9b39760f-d504-4e53-908d-6125a1827aea-21"> <o:Username>user</o:Username> <o:Password o:Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username- token-profile-1.0#PasswordText">pass</o:Password> </o:UsernameToken> </o:Security> </s:Header> <s:Body> <getPrdStatus xmlns="http://mynamespace.org/wsdl"> <request xmlns="" xmlns:a="http://mynamespace.org/wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <a:IsgrStsRequestTypeUser> <a:prdCode>LEPTO</a:prdCode> <a:sequenceNumber i:nil="true" /> <a:productionType i:nil="true" /> <a:statusDate>2010-04-01T11:47:01.1617641+02:00</a:statusDate> <a:ubn>123456</a:ubn> <a:animalSpeciesCode>RU</a:animalSpeciesCode> </a:IsgrStsRequestTypeUser> </request> </getPrdStatus> </s:Body> </s:Envelope> In return, I receive the following response: <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://mynamespace.org/wsdl"> <env:Header> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" /> </env:Header> <env:Body> <ns0:getPrdStatusResponse> <result> <ns0:IsgrStsResponseTypeUser> <ns0:prdCode>LEPTO</ns0:prdCode> <ns0:color>green</ns0:color> <ns0:stsCode>LEP1</ns0:stsCode> <ns0:sequenceNumber xsi:nil="1" /> <ns0:productionType xsi:nil="1" /> <ns0:IAndRCode>00</ns0:IAndRCode> <ns0:statusDate>2010-04-01T00:00:00.000+02:00</ns0:statusDate> <ns0:description>Gecertificeerd vrij</ns0:description> <ns0:ubn>123456</ns0:ubn> <ns0:animalSpeciesCode>RU</ns0:animalSpeciesCode> <ns0:name>gecertificeerd vrij</ns0:name> <ns0:ranking>17</ns0:ranking> </ns0:IsgrStsResponseTypeUser> </result> </ns0:getPrdStatusResponse> </env:Body> </env:Envelope> Why 
can't WCF deserialize this response header? I'm getting a "Security header is empty" exception: Server stack trace: at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout) at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message& message, TimeSpan timeout) at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout) at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates) at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout) at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout) at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) Who knows what is going on here? I've already tried Rick Strahl's suggestion and removed the timestamp from the request header. Any help greatly appreciated!
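For what it's worth, one setting that has resolved this exact "Security header is empty" symptom in similar WCF-to-Java setups is EnableUnsecuredResponse, which tells WCF to accept responses that are not secured even though the request was. It is only reachable through a CustomBinding and requires .NET 3.5 SP1 or later. The sketch below shows one way to wire it up from the client code above (it assumes the same "MyBinding" configuration, username and password variables, and needs System.ServiceModel and System.ServiceModel.Channels); treat it as something to try rather than a confirmed fix for this particular OC4J service.

```csharp
// Wrap the configured basicHttpBinding in a CustomBinding so the security
// binding element is reachable, then allow unsecured/empty security headers
// on the response.
BasicHttpBinding basic = new BasicHttpBinding("MyBinding");
CustomBinding custom = new CustomBinding(basic);

SecurityBindingElement security = custom.Elements.Find<SecurityBindingElement>();
if (security != null)
{
    security.EnableUnsecuredResponse = true;
}

WebserviceClient client = new WebserviceClient();
client.Endpoint.Binding = custom;
client.ClientCredentials.UserName.UserName = username;
client.ClientCredentials.UserName.Password = password;
var response = client.Ping();
```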

    Read the article

  • Rails User-Profile model challenges

    - by Craig
    I am attempting to create an enrollment process similar to SO's: route to an OpenID provider provider returns the user's information to the UsersController (a guess) UsersController creates user, then routes to the ProfilesController's new or edit action. For now, I'm simply trying to create the user, then route to the ProfilesController's new or edit action (not sure which I should be using). Here's what I have thus far: Models: class User < ActiveRecord::Base has_one :profile end class Profile < ActiveRecord::Base belongs_to :user end Routes: map.resources :users do |user| user.resource :profile end new_user_profile GET /users/:user_id/profile/new(.:format) {:controller=>"profiles", :action=>"new"} edit_user_profile GET /users/:user_id/profile/edit(.:format) {:controller=>"profiles", :action=>"edit"} user_profile GET /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"show"} PUT /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"update"} DELETE /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"destroy"} POST /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"create"} users GET /users(.:format) {:controller=>"users", :action=>"index"} POST /users(.:format) {:controller=>"users", :action=>"create"} new_user GET /users/new(.:format) {:controller=>"users", :action=>"new"} edit_user GET /users/:id/edit(.:format) {:controller=>"users", :action=>"edit"} user GET /users/:id(.:format) {:controller=>"users", :action=>"show"} PUT /users/:id(.:format) {:controller=>"users", :action=>"update"} DELETE /users/:id(.:format) {:controller=>"users", :action=>"destroy"} Controllers: class UsersController < ApplicationController # generate new-user form def new @user = User.new end # process new-user-form post def create @user = User.new(params[:user]) if @user.save redirect_to new_user_profile_path(@user) ... end end # generate edit-user form def edit @user = User.find(params[:id]) end # process edit-user-form post def update @user = User.find(params[:id]) respond_to do |format| if @user.update_attributes(params[:user]) flash[:notice] = 'User was successfully updated.' format.html { redirect_to(users_path) } format.xml { head :ok } ... end end end class ProfilesController < ApplicationController before_filter :get_user def get_user @user = User.find(params[:user_id]) end # generate new-profile form def new @user.profile = Profile.new @profile = @user.profile end # process new-profile-form post def create @user.profile = Profile.new(params[:profile]) @profile = @user.profile respond_to do |format| if @profile.save flash[:notice] = 'Profile was successfully created.' format.html { redirect_to(@profile) } format.xml { render :xml => @profile, :status => :created, :location => @profile } ... end end end # generate edit-profile form def edit @profile = @user.profile end # generate edit-profile-form post def update @profile = @user.profile respond_to do |format| if @profile.update_attributes(params[:profile]) flash[:notice] = 'Profile was successfully updated.' # format.html { redirect_to(@profile) } format.html { redirect_to(user_profile(@user)) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @profile.errors, :status => :unprocessable_entity } end end end Edit-User View: ... <% form_for(@user) do |f| %> ... New-Profile View: ... <% form_for([@user,@profile]) do |f| %> .. 
I'm having two problems:
1. When saving an edit to the User model, the UsersController attempts to route to http://localhost:3000/users/1/profile.%23%3Cprofile:0x10438e3e8%3E instead of http://localhost:3000/users/1/profile.
2. When the new-profile form is being rendered, it throws an error that reads: undefined method `user_profiles_path' for #
Is it better to create a blank profile when the user is created (in the UsersController) and then edit it, or to follow the RESTful convention of creating the profile in the ProfilesController (as I have done)? What am I missing? I did review Associating Two Models in Rails (user and profile), but it didn't address my needs. Thanks for your time.
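Regarding the undefined method `user_profiles_path' error: with a singular nested resource (user.resource :profile) there is no plural user_profiles_path helper, but form_for([@user, @profile]) guesses the plural name when @profile is a new record. A hedged sketch of the usual workaround, passing the URL explicitly and using the _path form of the helper in the redirect, is shown below; it addresses the second symptom and may or may not be related to the first.

```ruby
# new-profile view: give form_for an explicit URL so it does not guess user_profiles_path
<% form_for [@user, @profile], :url => user_profile_path(@user) do |f| %>
  ...
<% end %>
```

```ruby
# profiles_controller.rb, update action: use the *_path helper when redirecting
format.html { redirect_to(user_profile_path(@user)) }
```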

    Read the article

  • Running TeamCity from Amazon EC2 - Cloud based scalable build and continuous Integration

    - by RoyOsherove
    I’ve been having fun playing with the amazon EC2 cloud service. I set up a server running TeamCity, and an image of a server that just runs a TeamCity agent. I also setup TeamCity  to automatically instantiate agents on EC2 and shut them down based upon availability of free agents. Here’s how I did it: The first step was setting up the teamcity server. Create an account on amazon EC2 (BTW, amazon’s sites works better in IE than it does in chrome.. who knew!?) Open the EC2 dashboard, and click “Launch Instance” . From the “Quick Start” tab I selected from the list: “Getting Started on Microsoft Windows Server 2008 (AMI Id: ami-c5e40dac)” .  it’s good enough to just run teamcity. In the instance details, I used the default (Small instance, 1.7 GB mem). You might want to choose a close availability zone based on where you are. We want to “Launch instances” so click continue. Select the default kernel, RAM disk and all. No need to enable monitoring for now (you can do that later). click continue. If you don’t have a key pair, you will be prompted to create one. Once you do, select it in the list. Now you’ll be prompted to create a security group. I named mine “TC” as in “TeamCity”. each group is a bunch of settings on which ports can be let through into and out of a hosted machine.  keep it as the default settings. We will change them later. Click continue,  review and then click “Launch”. Now you’ll be able to see the new instance in the running instances list on your site. Now, you need to install stuff on that instance (TeamCity!) . To do that, you’ll need to Remote desktop into that instance. To do that, we’ll get the admin password for that instance: Check it on the list, and click “Instance Actions” - “Get Windows Admin Password”. You might have to wait about 10 minutes or so for the password to be generated for you. Once you have the password, you will remote desktop (start-run-‘mstsc’) into the instance. It’s address is a dns address shown below the list under “Public DNS”. it looks something like: ec2-256-226-194-91.compute-1.amazonaws.com Once you’re inside the instance – you’ll need to open IE (it is in hardened mode so you’ll have to relax its security settings to download stuff). I first downloaded chrome and using chrome I downloaded TeamCity. Note that the download speed is FAST. several MBs per second. To be able to see TeamCity from the outside, you will need to open the advanced firewall settings inside the remote machine, and add incoming and outgoing rules for port 80 (HTTP). Once you do that, you should be able to see the machine from the outside. If you still can’t, see the next step. I also enabled ports 9090 since I will use this machine to create an agent image later as well. Now configure the security group (TC) to enable talking to agents: IN the EC2 dashboard click on “Security Groups” and select your group. To add a rule, click on the empty list under the ‘protocol’ header. select TCP. from and ‘to’ ports are 9090. source ip is 0.0.0.0/0 (every ip is allowed). click “Save.  Also make sure you can see “HTTP” tcp 80 in that list. if you can’t see it, add it or you won’t be able to browse to the machine’s teamcity server home page. I also set an elastic IP for the machine: so I always have the same IP for the machine instance. Allocate and set one through the”Elastic IP” link on the EC2 dashboard.   you should now have a working instance of teamcity.   Now let’s create an agent image. 
Repeat steps 1-9, but this time make sure you select a machine that fits what an agent might do. I selected the Instance type: High-CPU Medium, which is much faster. On that machine, I installed what I needed (VS 2010, PostSharp etc.). Downloading VS 2010 from MSDN (2 GB) took less than 10 minutes! Now, instead of installing TeamCity, browse to the TeamCity homepage (from within the remote machine). Go to the Administration page, and click the upper right link "Install agents". Install the agent on the local machine and point it to the IP or DNS of the running TeamCity server. That way you'll be able to check its connectivity live before making this machine your official agent image to reuse. Once the agent is installed, check that the TC server can see it and use it (see steps 13-14 above if it can't). Once it works, you can take steps to make this image your reusable agent image. Next, here is a copy-paste of several steps to take from http://confluence.jetbrains.net/display/TCD5/Setting+Up+TeamCity+for+Amazon+EC2
Configure the system so that the agent is started on machine boot (and make sure the TeamCity server is accessible on machine boot). Test the setup by rebooting the machine and checking that the agent connects normally to the server.
Prepare the image for bundling:
Remove any temporary/history information in the system.
Stop the agent (under Windows stop the service but leave it in Automatic startup type).
Delete the agent logs and temp directories (not necessary).
Delete the "<Agent Home>/conf/amazon-*" file (not necessary).
Change config/buildAgent.properties to remove the properties: name, serverAddress, authToken (not necessary).
Now, we need to: make an AMI from the running instance, and configure TeamCity EC2 support on the TeamCity server.
Making an AMI: Check the instance of the agent in the EC2 dashboard instance list, and select Instance Actions -> Create Image (EBS AMI). You'll see the image pending in the AMIs list in the EC2 dashboard. This could take 30 minutes or more; meanwhile we can configure the cloud support in the TeamCity server. COPY THE AMI ID to the clipboard (it looks like ami-a88aa4ce).
Configuring TeamCity for Cloud: In TeamCity, click on "Agents" and then on the "Cloud" tab. This is where you will control your cloud agents. To configure new cloud agents based on AMIs, click on the link to the "configuration page". Create a new profile and select Amazon EC2 as the cloud type. Paste the AMI ID that you copied to the clipboard into the "Images" field. Select an availability zone that is the same as the one your instance is running on for the best communication performance between them, and make sure you select the 'TC' security group. Hopefully that should be it, and TeamCity will try to instantiate new instances on demand. Note that it may take around 10 minutes for an agent to become available to TeamCity from the time it's started.
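One of the JetBrains prerequisites above is that the agent starts on machine boot. On the Windows agent image this just means making sure the build agent Windows service is set to Automatic before you create the AMI. A one-line PowerShell sketch is below; the service name 'TCBuildAgent' is an assumption (it is typically what the Windows agent installer uses, but confirm it with Get-Service on your instance).

```powershell
# Run on the agent instance before creating the AMI.
# 'TCBuildAgent' is assumed to be the agent service name - verify with: Get-Service *agent*
Set-Service -Name 'TCBuildAgent' -StartupType Automatic
Start-Service -Name 'TCBuildAgent'   # let it connect to the TeamCity server once before bundling
```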

    Read the article

  • How to Easily Put a Windows PC into Kiosk Mode With Assigned Access

    - by Chris Hoffman
    Windows 8.1′s Assigned Access feature allows you to easily lock a Windows PC to a single application, such as a web browser. This feature makes it easy for anyone to configure Windows 8.1 devices as point-of-sale or other kiosk systems. In the past, setting up a Windows PC in kiosk mode involved much more work, requiring the use of third-party software, group policy, or Linux distributions designed around kiosk mode. Assigned Access is available on Windows 8.1 RT, Windows 8.1 Professional, and Windows 8.1 Enterprise. The standard edition of Windows 8.1 doesn’t support Assigned Access. Create a User Account for Assigned Access Rather than turn your entire computer into a locked-down kiosk system, Assigned Access allows you to create a separate user account that can only launch a single app — such as a web browser. To set this up, you must be logged into Windows as a user with administrator permissions. First, open the PC settings app — swipe in from the right or press Windows Key + C to open the charms bar, tap Settings, and tap Change PC settings. In the PC settings app, select Accounts and select Other accounts. Use the Add an account button to create a new Windows account. Select  the “Sign in without a Microsoft account” option and select Local account to create a local user account. You could also create a Microsoft account, but you may not want to do this if you just want a locked-down account with only browser access. If you need to install apps from the Windows Store to use in Assigned Access mode, you’ll have to set up a Microsoft account instead of a local account. A local account will still allow you access to the preinstalled apps, such as Internet Explorer. You may want to create a user account with a blank password. This would make it simple for anyone to access kiosk mode, even if the system becomes locked or needs to be rebooted. The account will be created as a standard user account with limited permissions. Leave it as a standard user account — don’t make it an administrator account. Set Up Assigned Access Once you’ve created an account, you’ll first need to sign into it. If you don’t, you’ll see a “This account has no apps” message when trying to enable Assigned Access. Go back to the welcome screen, log in to the new account you created, and allow Windows to go through the first-time account setup process. If you want to use a non-default app in kiosk mode, install it while logged in as that user account. Once you’re done, log out of the other account, log back in as your administrator account, and go back to the Other accounts screen. Click the Set up an account for assigned access option to continue. Select the user account you created and select the app you want to limit the account to. For a web-based kiosk, this can be a web browser such as the Modern version of Internet Explorer. Businesses can also create their own Modern apps and set them to run in kiosk mode in this way. Note that Microsoft’s documentation says “web browsers are not good choices for assigned access” because they require more permissions than average Modern (or “Windows Store”) apps. However, if you want to provide a kiosk for web-browsing, using Assigned Access is a much better option than using Guest Mode and offering up a full Windows desktop. When you’re done, restart your PC and log in as the Assigned Access account. Windows will automatically open the app you chose and won’t allow a user to leave that app. 
Standard Windows 8 features like the charms bar, app switcher, and Start screen won’t appear. Pressing the Windows key once will do nothing. To sign out of Assigned Access mode, press the Windows key five times — quickly — while signed in. You’ll be sent back to the standard login screen. The account will actually still be logged in and the app will remain running — this method just “locks” the screen and allows another user to log in. Automatically Log Into Assigned Access Whenever your Windows device boots, you can log into the Assigned Access account and turn it into a kiosk system. While this isn’t ideal for all kiosk systems, you may want the device to automatically launch the specific app when it boots without requiring any login process. To do so, you’ll just need to have Windows automatically log into the Assigned Access account when it boots. This option is hidden and not available in the standard Control Panel. You’ll need to use the hidden netplwiz Control Panel tool to set up automatic login on boot. If you didn’t create a password for the user account, leave the Password field empty while configuring this. Security Considerations If you’re using this feature to turn a Windows 8.1 system into a kiosk and leaving it open to the public, remember to consider security. Anyone could come up to the system, press the Windows key five times, and try to log into your standard administrator user account. Ensure the administrator user account has a strong password so people won’t be able to get past the kiosk system’s limitations and tamper with the system. Even Windows 8′s detractors have to admit that it’s an ideal system for a touch-screen kiosk device, running either a browser or another specific application. Assigned Access finally makes this easy to set up on Windows systems in the real world — no IT experience, third-party software, or Linux distributions necessary.     
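If you prefer to script the automatic sign-in rather than click through netplwiz, the same result can usually be had by setting the classic Winlogon auto-logon registry values, as in the sketch below. The account name 'KioskUser' is a placeholder for whatever you called your Assigned Access user, and note that DefaultPassword is stored in plain text, which is another reason to give the kiosk account a blank password and keep the administrator account strongly protected.

```powershell
# Run from an elevated PowerShell prompt. 'KioskUser' is a placeholder account name.
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon  -Value '1'
Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value 'KioskUser'
Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value ''   # blank if the account has no password
```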

    Read the article

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features?  Maybe you want to use one or two controls like the LineChart or a MediaView.  Maybe you want to embed a large Scene Graph as an initial step in porting your application to FX.  A hybrid Swing/FX application might just be the answer. Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not simple as a "pure" JavaFX application.  The existing tools focused on packaging FX Applications, or Swing applications - they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case.  Solution  In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications.  You now use the same deployment approach as you would for pure JavaFX applications.  Just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy.  The only difference is setting toolkit attribute for the fx:application tag as shown below: <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>  The value of ${main.class} in the example above is your application class which has a main method.  It does not need to extend JavaFX Application class. The resulting package provides support for the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to the packages created for a pure FX application.  You will see two JNLP files generated in the case of a hybrid application - one for use from Swing applet and another for the webstart launch.  Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply, for example a Swing application can not use JavaFX Preloaders and code changes are necessary to support HTML splash screens. Why should I use the JavaFX ant tasks for packaging my Swing application?  While using FX packaging tool for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach.  The primary justification for our packaging tools is to simplify the creation of your application artifacts, and to reduce manual errors.  Plus, no one should have to write JNLP by hand. Some specific benefits include: Your application jar will include a launcher program.  This improves your standalone launch by: checking for the JavaFX runtime guiding the user through any necessary installations setting the system proxy for Java The ant tasks will generate JNLP and HTML files for your swing app: avoids learning unnecessary details about JNLP, and eliminates the error-prone hand editing of JNLP files simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance.you can also embed the signing certificate details to improve the user's experience  allows the use of web page templates to inject the generated code directly into your actual web page instead of being forced to copy/paste the generated code snippets. What about native packing? Absolutely!  The very same ant task can generate a native bundle for a Swing application with JavaFX content.  
Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg.   I also used another feature on these examples: a click-through license agreement for .exe installers and OS X DMG drag installers. Small Caveat This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application.  If you are trying to embed JavaFX content into existing project (with an existing build/packing process) then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. It some cases solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle.  The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest.  This may lead to some redundant steps in your build process.  Hopefully the benefits will outweigh the problems. This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us.  You can comment here on the blog or file issues to JIRA. Sample code Here is the full ant code used to package SwingInterop.  You can grab latest JavaFX samples and try it yourself:  <target name="-post-jar"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <!-- Mark application as Swing-based --> <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/> <!-- Create doubleclickable jar file with embedded launcher --> <fx:jar destfile="${dist.jar}"> <fileset dir="${build.classes.dir}"/> <fx:application refid="swingFXApp" name="SwingInterop"/> <manifest> <attribute name="Implementation-Vendor" value="${application.vendor}"/> <attribute name="Implementation-Title" value="${application.title}"/> <attribute name="Implementation-Version" value="1.0"/> </manifest> </fx:jar> <!-- sign application jar. Use new self signed certificate --> <delete file="${build.dir}/test.keystore"/> <genkey alias="TestAlias" storepass="xyz123" keystore="${build.dir}/test.keystore" dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/> <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123"> <fileset file="${dist.jar}"/> </fx:signjar> <!-- generate JNLPs, HTML and native bundles --> <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all" outdir="${basedir}/${dist.dir}" embedJNLP="true" outfile="${application.title}"> <fx:application refId="swingFXApp"/> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/> </fx:resources> <fx:permissions/> <info title="Sample app: ${application.title}" vendor="${application.vendor}"/> </fx:deploy> </target>

    Read the article

  • HttpPost works in Java project, not in Android

    - by dave.c
    I've written some code for my Android device to login to a web site over https and parse some data out of the resulting pages. An HttpGet happens first to get some info needed for login, then an HttpPost to do the actual login process. The code below works great in a Java project within Eclipse which has the following Jar files on the build path: httpcore-4.1-beta2.jar, httpclient-4.1-alpha2.jar, httpmime-4.1-alpha2.jar, commons-logging-1.1.1.jar. public static MyBean gatherData(String username, String password) { MyBean myBean = new MyBean(); try { HttpResponse response = doHttpGet(URL_PAGE_LOGIN, null, null); System.out.println("Got login page"); String content = EntityUtils.toString(response.getEntity()); String token = ContentParser.getToken(content); String cookie = getCookie(response); System.out.println("Performing login"); System.out.println("token = "+token +" || cookie = "+cookie); response = doLoginPost(username,password,cookie, token); int respCode = response.getStatusLine().getStatusCode(); if (respCode != 302) { System.out.println("ERROR: not a 302 redirect!: code is \""+ respCode+"\""); if (respCode == 200) { System.out.println(getHeaders(response)); System.out.println(EntityUtils.toString(response.getEntity()).substring(0, 500)); } } else { System.out.println("Logged in OK, loading account home"); // redirect handler and rest of parse removed } }catch (Exception e) { System.out.println("ERROR in gatherdata: "+e.toString()); e.printStackTrace(); } return myBean; } private static HttpResponse doHttpGet(String url, String cookie, String referrer) { try { HttpClient client = new DefaultHttpClient(); client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8"); HttpGet httpGet = new HttpGet(url); httpGet.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); httpGet.setHeader(HEADER_USER_AGENT,HEADER_USER_AGENT_VALUE); if (referrer != null && !referrer.equals("")) httpGet.setHeader(HEADER_REFERER,referrer); if (cookie != null && !cookie.equals("")) httpGet.setHeader(HEADER_COOKIE,cookie); return client.execute(httpGet); } catch (Exception e) { e.printStackTrace(); throw new ConnectException("Failed to read content from response"); } } private static HttpResponse doLoginPost(String username, String password, String cookie, String token) throws ClientProtocolException, IOException { try { HttpClient client = new DefaultHttpClient(); client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); client.getParams().setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, "UTF-8"); HttpPost post = new HttpPost(URL_LOGIN_SUBMIT); post.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); post.setHeader(HEADER_USER_AGENT,HEADER_USER_AGENT_VALUE); post.setHeader(HEADER_REFERER, URL_PAGE_LOGIN); post.setHeader(HEADER_COOKIE, cookie); post.setHeader("Content-Type","application/x-www-form-urlencoded"); List<NameValuePair> formParams = new ArrayList<NameValuePair>(); formParams.add(new BasicNameValuePair("org.apache.struts.taglib.html.TOKEN", token)); formParams.add(new BasicNameValuePair("showLogin", "true")); formParams.add(new BasicNameValuePair("upgrade", "")); formParams.add(new BasicNameValuePair("username", username)); formParams.add(new BasicNameValuePair("password", password)); formParams.add(new BasicNameValuePair("submit", "Secure+Log+in")); UrlEncodedFormEntity entity = new 
UrlEncodedFormEntity(formParams,HTTP.UTF_8); post.setEntity(entity); return client.execute(post); } catch (Exception e) { e.printStackTrace(); throw new ConnectException("ERROR in doLoginPost(): "+e.getMessage()); } } The server (which is not under my control) returns a 302 redirect when the login was successful, and 200 if it fails and re-loads the login page. When run with the above Jar files I get the 302 redirect, however if I run the exact same code from an Android project with the 1.6 Android Jar file on the build path I get the 200 response from the server. I get the same 200 response when running the code on my 2.2 device. My android application has internet permissions, and the HttpGet works fine. I'm assuming that the problem lies in the fact that HttpPost (or some other class) is different in some significant way between the Android Jar version and the newer Apache versions. I've tried adding the Apache libraries to the build path of the Android project, but due to the duplicate classes I get messages like: INFO/dalvikvm(390): DexOpt: not resolving ambiguous class 'Lorg/apache/http/impl/client/DefaultHttpClient;' in the log. I've also tried using a MultipartEntity instead of the UrlEncodedFormEntity but I get the same 200 result. So, I have a few questions: - Can I force the code running under android to use the newer Apache libraries in preference to the Android versions? - If not, does anyone have any ideas how can I alter my code so that it works with the Android Jar? - Are there any other, totally different approaches to doing an HttpPost in Android? - Any other ideas? I've read a lot of posts and code but I'm not getting anywhere. I've been stuck on this for a couple of days and I'm at a loss how to get the thing to work, so I'll try anything at this point. Thanks in advance.
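On the question of totally different approaches: one option that avoids the bundled Apache classes altogether is java.net.HttpURLConnection, which is available on every Android version. The sketch below posts the same form fields as doLoginPost and reuses the URL_LOGIN_SUBMIT and URL_PAGE_LOGIN constants and the cookie/token gathered from the GET; it is illustrative only and leaves redirect handling and response reading to the caller.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Alternative to HttpClient: plain HttpURLConnection form POST.
private static int doLoginPostViaUrlConnection(String username, String password,
                                               String cookie, String token) throws IOException {
    String body = "org.apache.struts.taglib.html.TOKEN=" + URLEncoder.encode(token, "UTF-8")
            + "&showLogin=true"
            + "&upgrade="
            + "&username=" + URLEncoder.encode(username, "UTF-8")
            + "&password=" + URLEncoder.encode(password, "UTF-8")
            + "&submit=" + URLEncoder.encode("Secure Log in", "UTF-8");

    HttpURLConnection conn = (HttpURLConnection) new URL(URL_LOGIN_SUBMIT).openConnection();
    conn.setInstanceFollowRedirects(false);   // we want to see the 302 ourselves
    conn.setDoOutput(true);
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setRequestProperty("Cookie", cookie);
    conn.setRequestProperty("Referer", URL_PAGE_LOGIN);

    OutputStream out = conn.getOutputStream();
    try {
        out.write(body.getBytes("UTF-8"));
    } finally {
        out.close();
    }
    return conn.getResponseCode();            // expect 302 on a successful login
}
```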

    Read the article

  • WP: AesManaged encryption vs. mcrypt_encrypt

    - by invalidusername
    I'm trying to synchronize my encryption and decryption methods between C# and PHP but something seems to be going wrong. In the Windows Phone 7 SDK you can use AESManaged to encrypt your data I use the following method: public static string EncryptA(string dataToEncrypt, string password, string salt) { AesManaged aes = null; MemoryStream memoryStream = null; CryptoStream cryptoStream = null; try { //Generate a Key based on a Password, Salt and HMACSHA1 pseudo-random number generator Rfc2898DeriveBytes rfc2898 = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt)); //Create AES algorithm with 256 bit key and 128-bit block size aes = new AesManaged(); aes.Key = rfc2898.GetBytes(aes.KeySize / 8); aes.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; // rfc2898.GetBytes(aes.BlockSize / 8); // to check my results against those of PHP var blaat1 = Convert.ToBase64String(aes.Key); var blaat2 = Convert.ToBase64String(aes.IV); //Create Memory and Crypto Streams memoryStream = new MemoryStream(); cryptoStream = new CryptoStream(memoryStream, aes.CreateEncryptor(), CryptoStreamMode.Write); //Encrypt Data byte[] data = Encoding.Unicode.GetBytes(dataToEncrypt); cryptoStream.Write(data, 0, data.Length); cryptoStream.FlushFinalBlock(); //Return Base 64 String string result = Convert.ToBase64String(memoryStream.ToArray()); return result; } finally { if (cryptoStream != null) cryptoStream.Close(); if (memoryStream != null) memoryStream.Close(); if (aes != null) aes.Clear(); } } I solved the problem of generating the Key. The Key and IV are similar as those on the PHP end. But then the final step in the encryption is going wrong. here is my PHP code <?php function pbkdf2($p, $s, $c, $dk_len, $algo = 'sha1') { // experimentally determine h_len for the algorithm in question static $lengths; if (!isset($lengths[$algo])) { $lengths[$algo] = strlen(hash($algo, null, true)); } $h_len = $lengths[$algo]; if ($dk_len > (pow(2, 32) - 1) * $h_len) { return false; // derived key is too long } else { $l = ceil($dk_len / $h_len); // number of derived key blocks to compute $t = null; for ($i = 1; $i <= $l; $i++) { $f = $u = hash_hmac($algo, $s . pack('N', $i), $p, true); // first iterate for ($j = 1; $j < $c; $j++) { $f ^= ($u = hash_hmac($algo, $u, $p, true)); // xor each iterate } $t .= $f; // concatenate blocks of the derived key } return substr($t, 0, $dk_len); // return the derived key of correct length } } $password = 'test'; $salt = 'saltsalt'; $text = "texttoencrypt"; #$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC); #echo $iv_size . '<br/>'; #$iv = mcrypt_create_iv($iv_size, MCRYPT_RAND); #print_r (mcrypt_list_algorithms()); $iv = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"; $key = pbkdf2($password, $salt, 1000, 32); echo 'key: ' . base64_encode($key) . '<br/>'; echo 'iv: ' . base64_encode($iv) . '<br/>'; echo '<br/><br/>'; function addpadding($string, $blocksize = 32){ $len = strlen($string); $pad = $blocksize - ($len % $blocksize); $string .= str_repeat(chr($pad), $pad); return $string; } echo 'text: ' . $text . '<br/>'; echo 'text: ' . addpadding($text) . '<br/>'; // -- works till here $crypttext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, $text, MCRYPT_MODE_CBC, $iv); echo '1.' . $crypttext . '<br/>'; $crypttext = base64_encode($crypttext); echo '2.' . $crypttext . '<br/>'; $crypttext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, addpadding($text), MCRYPT_MODE_CBC, $iv); echo '1.' . $crypttext . 
'<br/>'; $crypttext = base64_encode($crypttext); echo '2.' . $crypttext . '<br/>'; ?> So to point out: the Key and IV look the same on both the .NET and PHP sides, but something seems to be going wrong in the final call to mcrypt_encrypt(). The end result, the encrypted string, differs from the one .NET produces. Can anybody tell me what I'm doing wrong? As far as I can see everything should be correct. Thank you! EDIT: Additional information on the AesManaged object in .NET: KeySize = 256, Mode = CBC, Padding = PKCS7.
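A few mismatches on the PHP side are the usual culprits for exactly this symptom. MCRYPT_RIJNDAEL_256 selects Rijndael with a 256-bit block size, whereas AesManaged is AES, i.e. Rijndael with a 128-bit block (the 256 in this setup refers only to the key length), so the PHP side should use MCRYPT_RIJNDAEL_128. In addition, the C# code encrypts Encoding.Unicode (UTF-16LE) bytes and relies on the default PKCS7 padding, while the PHP code encrypts the raw single-byte string and either zero-pads or pads to a 32-byte block. A sketch of the adjusted PHP call, reusing the pbkdf2(), $password, $salt and $text values from the code above:

```php
<?php
// Sketch: match AesManaged (AES = Rijndael with a 128-bit block) rather than
// MCRYPT_RIJNDAEL_256, and apply PKCS7 padding like .NET does by default.
// The .NET side encrypts Encoding.Unicode (UTF-16LE) bytes, so convert the
// plaintext to the same encoding before encrypting.
function pkcs7_pad($data, $blocksize = 16) {
    $pad = $blocksize - (strlen($data) % $blocksize);
    return $data . str_repeat(chr($pad), $pad);
}

$key = pbkdf2($password, $salt, 1000, 32);           // same derivation as above
$iv  = str_repeat("\x00", 16);                        // 16-byte IV for a 128-bit block
$plaintext = mb_convert_encoding($text, 'UTF-16LE');  // mirror Encoding.Unicode

$crypt = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, pkcs7_pad($plaintext), MCRYPT_MODE_CBC, $iv);
echo base64_encode($crypt);
?>
```
With the cipher block size, plaintext encoding and padding aligned, the two sides should produce the same Base64 output for the same key and IV.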

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.

    Read the article

  • ASP.NET MVC + MySql Membership Provider, user cannot login

    - by Jason Miesionczek
    Hello, I've been playing around with using MySql as the membership provider for ASP.NET MVC forms authentication. I've got things configured correctly as far as I can tell, and I can create users via both the Register action and the ASP.NET web configuration site. However, when I try to log in with one of those users, it does not work: it returns an error as if I had entered a wrong password, or as if the account didn't exist. I have verified in the database that the account does exist. I've followed the instructions here for reference: http://blog.tchami.com/post/ASPNET-MVC-2-and-MySQL-Membership-Provider.aspx Here is my web.config:

    <?xml version="1.0"?>
    <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=152368 -->
    <configuration>
      <connectionStrings>
        <add name="MySQLConn" connectionString="Server=localhost;Database=intereditor;Uid=<user>;Pwd=<password>;"/>
      </connectionStrings>
      <system.web>
        <compilation debug="true" targetFramework="4.0">
          <assemblies>
            <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          </assemblies>
        </compilation>
        <authentication mode="Forms">
          <forms loginUrl="~/Account/LogOn" timeout="2880" name=".ASPXFORM$" path="/" requireSSL="false" slidingExpiration="true" enableCrossAppRedirects="false" />
        </authentication>
        <membership defaultProvider="MySqlMembershipProvider">
          <providers>
            <clear/>
            <add name="MySqlMembershipProvider" type="MySql.Web.Security.MySQLMembershipProvider,MySql.Web,Version=6.3.4.0, Culture=neutral,PublicKeyToken=c5687fc88969c44d" autogenerateschema="true" connectionStringName="MySQLConn" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/" />
          </providers>
        </membership>
        <profile defaultProvider="MySqlProfileProvider">
          <providers>
            <clear/>
            <add name="MySqlProfileProvider" type="MySql.Web.Profile.MySQLProfileProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" />
          </providers>
        </profile>
        <roleManager enabled="true" defaultProvider="MySqlRoleProvider">
          <providers>
            <clear />
            <add name="MySqlRoleProvider" type="MySql.Web.Security.MySQLRoleProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" />
          </providers>
        </roleManager>
        <pages>
          <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Routing" />
          </namespaces>
        </pages>
      </system.web>
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <modules runAllManagedModulesForAllRequests="true"/>
      </system.webServer>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
            <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    Can anyone please help me identify what is
wrong so that users can log in?

UPDATE: After debugging the login process in the code of the membership provider itself, I discovered that there is a bug in the provider: there is a discrepancy between the password hash that is stored in the database and the hash that is generated from the entered password. As a workaround for my issue, I changed the password format to 'Encrypted' and added a machineKey to my web.config. I am still interested in figuring out the issue with the hashed format in the provider and will spend some more time debugging it; if I can figure out the problem, I will put together a patch and submit it.
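For anyone hitting the same symptom, a quick way to confirm whether the provider itself is rejecting the credentials (rather than anything in the MVC login action) is to call the membership API directly. The sketch below is a minimal diagnostic, not part of the original question; the controller and action names are hypothetical, and it assumes the MySqlMembershipProvider above is the configured default provider.

    using System.Web.Mvc;
    using System.Web.Security;

    public class DiagnosticsController : Controller
    {
        // Hypothetical action used only to isolate the failure: if ValidateUser
        // returns false here, the problem lies in the provider/hashing rather
        // than in the MVC forms-authentication plumbing.
        [HttpGet]
        public ContentResult CheckLogin(string userName, string password)
        {
            MembershipUser user = Membership.GetUser(userName);
            bool valid = Membership.ValidateUser(userName, password);

            return Content(string.Format(
                "User found: {0}, IsApproved: {1}, IsLockedOut: {2}, ValidateUser: {3}",
                user != null,
                user != null ? user.IsApproved.ToString() : "n/a",
                user != null ? user.IsLockedOut.ToString() : "n/a",
                valid));
        }
    }

Checking IsApproved and IsLockedOut is worthwhile because a membership user that was created unapproved, or locked out by repeated failed attempts, produces exactly the same "wrong password" behaviour at the login form.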

    Read the article

  • Exception - Illegal Block size during decryption(Android)

    - by Vamsi
    I am writing an application which encrypts and decrypts the user notes based on the user set password. i used the following algorithms for encryption/decryption 1. PBEWithSHA256And256BitAES-CBC-BC 2. PBEWithMD5And128BitAES-CBC-OpenSSL e_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES-CBC-BC); d_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES-CBC-BC); e_Cipher.init() d_Cipher.init() encryption is working well, but when trying to decrypt it gives Exception - Illegal Block size after encryption i am converting the cipherText to HEX and storing it in a sqlite database. i am retrieving correct values from the sqlite database during decyption but when calling d_Cipher.dofinal() it throws the Exception. I thought i missed to specify the padding and tried to check what are the other available cipher algorithms but i was unable to found. so request you to please give the some knowledge on what are the cipher algorithms and padding that are supported by Android? if the algorithm which i used can be used for padding, how should i specify the padding mechanism? I am pretty new to Encryption so tried a couple of algorithms which are available in BouncyCastle.java but unsuccessful. As requested here is the code public class CryptoHelper { private static final String TAG = "CryptoHelper"; //private static final String PBEWithSHA256And256BitAES = "PBEWithSHA256And256BitAES-CBC-BC"; //private static final String PBEWithSHA256And256BitAES = "PBEWithMD5And128BitAES-CBC-OpenSSL"; private static final String PBEWithSHA256And256BitAES = "PBEWithMD5And128BitAES-CBC-OpenSSLPBEWITHSHA1AND3-KEYTRIPLEDES-CB"; private static final String randomAlgorithm = "SHA1PRNG"; public static final int SALT_LENGTH = 8; public static final int SALT_GEN_ITER_COUNT = 20; private final static String HEX = "0123456789ABCDEF"; private Cipher e_Cipher; private Cipher d_Cipher; private SecretKey secretKey; private byte salt[]; public CryptoHelper(String password) throws InvalidKeyException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidAlgorithmParameterException, InvalidKeySpecException { char[] cPassword = password.toCharArray(); PBEKeySpec pbeKeySpec = new PBEKeySpec(cPassword); PBEParameterSpec pbeParamSpec = new PBEParameterSpec(salt, SALT_GEN_ITER_COUNT); SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBEWithSHA256And256BitAES); secretKey = keyFac.generateSecret(pbeKeySpec); SecureRandom saltGen = SecureRandom.getInstance(randomAlgorithm); this.salt = new byte[SALT_LENGTH]; saltGen.nextBytes(this.salt); e_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES); d_Cipher = Cipher.getInstance(PBEWithSHA256And256BitAES); e_Cipher.init(Cipher.ENCRYPT_MODE, secretKey, pbeParamSpec); d_Cipher.init(Cipher.DECRYPT_MODE, secretKey, pbeParamSpec); } public String encrypt(String cleartext) throws IllegalBlockSizeException, BadPaddingException { byte[] encrypted = e_Cipher.doFinal(cleartext.getBytes()); return convertByteArrayToHex(encrypted); } public String decrypt(String cipherString) throws IllegalBlockSizeException { byte[] plainText = decrypt(convertStringtobyte(cipherString)); return(new String(plainText)); } public byte[] decrypt(byte[] ciphertext) throws IllegalBlockSizeException { byte[] retVal = {(byte)0x00}; try { retVal = d_Cipher.doFinal(ciphertext); } catch (BadPaddingException e) { Log.e(TAG, e.toString()); } return retVal; } public String convertByteArrayToHex(byte[] buf) { if (buf == null) return ""; StringBuffer result = new StringBuffer(2*buf.length); for (int i = 0; i < buf.length; i++) { 
appendHex(result, buf[i]); } return result.toString(); } private static void appendHex(StringBuffer sb, byte b) { sb.append(HEX.charAt((b>>4)&0x0f)).append(HEX.charAt(b&0x0f)); } private static byte[] convertStringtobyte(String hexString) { int len = hexString.length()/2; byte[] result = new byte[len]; for (int i = 0; i < len; i++) { result[i] = Integer.valueOf(hexString.substring(2*i, 2*i+2), 16).byteValue(); } return result; } public byte[] getSalt() { return salt; } public SecretKey getSecretKey() { return secretKey; } public static SecretKey createSecretKey(char[] password) throws NoSuchAlgorithmException, InvalidKeySpecException { PBEKeySpec pbeKeySpec = new PBEKeySpec(password); SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBEWithSHA256And256BitAES); return keyFac.generateSecret(pbeKeySpec); } } I will call mCryptoHelper.decrypt(String str) then this results in Illegal block size exception My Env: Android 1.6 on Eclipse

    Read the article

  • Keypress detection won't work after seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason after implementing full-screen in it the input check won't work anymore. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method: public class Game1 : Microsoft.Xna.Framework.Game { // ... protected override void Update(GameTime gameTime) { // ... // Check if Enter was pressed - if so, generate a new map if (CheckInput(Keys.Enter, 1)) { blocks = newMap(map, blocks, console); } // ... } // Method: Checks if a key is/was pressed public bool CheckInput(Keys key, int checkType) { // Get current keyboard state KeyboardState newState = Keyboard.GetState(); bool retType = false; // Return type if (checkType == 0) { // Check Type: Is key currently down? if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; } } else if (checkType == 1) { // Check Type: Was the key pressed? if (newState.IsKeyDown(key)) { if (!oldState.IsKeyDown(key)) { // Key was just pressed retType = true; } else { // Key was already pressed, return false retType = false; } } } // Save keyboard state oldState = newState; // Return result if (retType == true) { return true; } else { return false; } } // Method: Generate a new map public List<Block> newMap(Map map, List<Block> blockList, Console console) { // Create new map block coordinates List<Vector2> positions = new List<Vector2>(); positions = map.generateMap(console); // Clear list and reallocate memory previously used up by it blockList.Clear(); blockList.TrimExcess(); // Add new blocks to the list using positions created by generateMap() foreach (Vector2 pos in positions) { blockList.Add(new Block() { Position = pos, Texture = dirtTex }); } // Return modified list return blockList; } // ... 
} and the generateMap code: // Generate a list of Vector2 positions for blocks public List<Vector2> generateMap(Console console, int method = 0) { ScreenTileWidth = gDevice.Viewport.Width / 16; ScreenTileHeight = gDevice.Viewport.Height / 16; maxHeight = gDevice.Viewport.Height; List<Vector2> blockLocations = new List<Vector2>(); if (useScreenSize == true) { Width = ScreenTileWidth; Height = ScreenTileHeight; } else { maxHeight = Height; } int startHeight = -500; // For debugging purposes, the startHeight is set to an // hopefully-unreachable value - if it returns this, something is wrong // Methods of land generation /// <summary> /// Third version land generation /// Generates a base land height as the second version does /// but also generates a 'max change' value which determines how much /// the land can raise or lower by which it now does by a random amount /// during generation /// </summary> if (method == 0) { // Get the land height startHeight = rnd.Next(1, maxHeight); int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by int curHeight = startHeight; for (int w = 0; w < Width; w++) { // Run a chance to lower/raise ground level int changeBy = rnd.Next(1, maxChange); int doChange = rnd.Next(0, 3); if (doChange == 1 && !(curHeight <= (1 + maxChange))) { curHeight = curHeight - changeBy; } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) { curHeight = curHeight + changeBy; } for (int h = curHeight; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; blockLocations.Add(new Vector2(x, y)); } } console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString()); } /// <summary> /// Second version land generator /// Generates a solid mass of land starting at a random height /// derived from either screen height or provided height value /// </summary> else if (method == 1) { // Get the land height startHeight = rnd.Next(0, 30); for (int w = 0; w < Width; w++) { for (int h = startHeight; h < ScreenTileHeight; h++) { // Location variables float x = w * 16; float y = h * 16; // Add a tile at set location blockLocations.Add(new Vector2(x, y)); } } } /// <summary> /// First version land generator /// Generates land completely randomly either across screen or /// in a box set by Width and Height values /// </summary> else { // For each tile in the map... for (int w = 0; w < Width; w++) { for (int h = 0; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; // ...decide whether or not to place a tile... if (rnd.Next(0, 2) == 1) { // ...and if so, add a tile at that location. blockLocations.Add(new Vector2(x, y)); } } } } console.newMsg("[INFO] Cur, base height: " + startHeight.ToString()); return blockLocations; } I never touched any of the above code for this when it broke - changing keys won't seem to fix it. Despite this, I have camera movement set inside another Game1 method that uses WASD and works perfectly. All I did was add a few lines of code here: private int BackBufferWidth = 1280; // Added these variables private int BackBufferHeight = 800; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = BackBufferWidth; // and this graphics.PreferredBackBufferHeight = BackBufferHeight; // this Content.RootDirectory = "Content"; this.graphics.IsFullScreen = true; // and this } When I try adding a console line to be printed in the event the key is pressed, it seems that the If is never even triggered despite the correct key being pressed.
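One likely culprit, given the code above, is that oldState is overwritten at the end of every CheckInput call, so if anything else polls CheckInput during the same Update (the WASD camera handling, for example), the Enter "was pressed" edge can be consumed before this check ever sees it. The sketch below shows one way to restructure the input handling so the keyboard is sampled exactly once per frame; it assumes the standard XNA Keyboard/KeyboardState API, and the class and helper names are made up for illustration.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input;

    public class InputExample : Microsoft.Xna.Framework.Game
    {
        private KeyboardState oldState;
        private KeyboardState newState;

        protected override void Update(GameTime gameTime)
        {
            // Snapshot the keyboard exactly once per frame.
            newState = Keyboard.GetState();

            if (WasPressed(Keys.Enter))
            {
                // Regenerate the map here.
            }

            // Do all other key checks (camera movement, etc.) against the same
            // newState/oldState pair, then roll the snapshot over once, at the
            // very end of Update.
            oldState = newState;

            base.Update(gameTime);
        }

        // Edge-triggered check: true only on the frame the key goes down.
        private bool WasPressed(Keys key)
        {
            return newState.IsKeyDown(key) && oldState.IsKeyUp(key);
        }
    }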

    Read the article

  • MVC 3 Remote Validation jQuery error on submit

    - by Richard Reddy
    I seem to have a weird issue with remote validation on my project. I am doing a simple validation check on an email field to ensure that it is unique. I've noticed that unless I put the cursor into the textbox and then remove it to trigger the validation at least once before submitting my form I will get a javascript error. e[h] is not a function jquery.min.js line 3 If I try to resubmit the form after the above error is returned everything works as expected. It's almost like the form tried to submit before waiting for the validation to return or something. Am I required to silently fire off a remote validation request on submit before submitting my form? Below is a snapshot of the code I'm using: (I've also tried GET instead of POST but I get the same result). As mentioned above, the code works fine but the form returns a jquery error unless the validation is triggered at least once. Model: public class RegisterModel { [Required] [Remote("DoesUserNameExist", "Account", HttpMethod = "POST", ErrorMessage = "User name taken.")] [Display(Name = "User name")] public string UserName { get; set; } [Required] [Display(Name = "Firstname")] public string Firstname { get; set; } [Display(Name = "Surname")] public string Surname { get; set; } [Required] [Remote("DoesEmailExist", "Account", HttpMethod = "POST", ErrorMessage = "Email taken.", AdditionalFields = "UserName")] [Display(Name = "Email address")] public string Email { get; set; } [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)] [DataType(DataType.Password)] [Display(Name = "Password")] public string Password { get; set; } [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 8)] [DataType(DataType.Password)] [Display(Name = "Confirm password")] public string ConfirmPassword { get; set; } [Display(Name = "Approved?")] public bool IsApproved { get; set; } } public class UserRoleModel { [Display(Name = "Assign Roles")] public IEnumerable<RoleViewModel> AllRoles { get; set; } public RegisterModel RegisterUser { get; set; } } Controller: // POST: /Account/DoesEmailExist // passing in username so that I can ignore the same email address for the same user on edit page [HttpPost] public JsonResult DoesEmailExist([Bind(Prefix = "RegisterUser.Email")]string Email, [Bind(Prefix = "RegisterUser.UserName")]string UserName) { var user = Membership.GetUserNameByEmail(Email); if (!String.IsNullOrEmpty(UserName)) { if (user == UserName) return Json(true); } return Json(user == null); } View: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript" src="/Content/web/js/jquery.unobtrusive-ajax.min.js"></script> <script type="text/javascript" src="/Content/web/js/jquery.validate.min.js"></script> <script type="text/javascript" src="/Content/web/js/jquery.validate.unobtrusive.min.js"></script> ...... 
@using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="titleh"> <h3>Edit a user account</h3> </div> <div class="body"> @Html.HiddenFor(model => model.RegisterUser.UserName) @Html.Partial("_CreateOrEdit", Model) <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(model => model.RegisterUser.IsApproved)</span> @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, true, new { @class = "uniform" }) Active @Html.RadioButtonFor(model => model.RegisterUser.IsApproved, false, new { @class = "uniform" }) Disabled <div class="clear"></div> </div> <div class="button-box"> <input type="submit" name="submit" value="Save" class="st-button"/> @Html.ActionLink("Back to List", "Index", null, new { @class = "st-clear" }) </div> </div> } CreateEdit Partial View @model Project.Domain.Entities.UserRoleModel <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Firstname)</span> @Html.TextBoxFor(m => m.RegisterUser.Firstname, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Firstname) <div class="clear"></div> </div> <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Surname)</span> @Html.TextBoxFor(m => m.RegisterUser.Surname, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Surname) <div class="clear"></div> </div> <div class="st-form-line"> <span class="st-labeltext">@Html.LabelFor(m => m.RegisterUser.Email)</span> @Html.TextBoxFor(m => m.RegisterUser.Email, new { @class = "st-forminput", @style = "width:300px" }) @Html.ValidationMessageFor(m => m.RegisterUser.Email) <div class="clear"></div> </div> Thanks, Rich

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g around a nice "How To" catalog of features? to observe OBI and Essbase relationships at work? to discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: Enhanced visualizations as Geo-spacial maps and interactive dashboards, Action Framework,  BI Publisher, Scorecard and Strategy Management, Mobile style sheets, Semantic layer modeling, Multi-source federation, Integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more The FSA is intended to be comprehensive, it is big (see CAVEAT below). The FSA is not an Oracle product, it is a good will free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstrative purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product. BENEFITS The FSA helps as a demonstrator of OBIEE 11g best practices, a tutorial, an environment "Test & Scrap", a SR bench (regression, conflicts), a tuning bench, a quick ready made POC seed for projects, a security options environment, ... The FSA - Is organized around a catalog of functional features - Has been deployed over 1000 times, it should be stable RELEASE The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the Product GA date (OBIEE). In early December 2011, a new functional Patch (V110) is released. It is easily applied (in less than 15 mins) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:        1. Web Catalog Statistics Application: Provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes and more        2. Data inflation Scripts: A set of simple SQL procedures to quickly inflate SampleApp Fact and Dimension data to millions of records in a few minutes        3. Public Content Extensions Framework: A patching framework for public examples and contributions leveraging SampleApp        4. Additional report examples (including bridge report, external chart integrations) and bug fixes DISTRIBUTION as VBox image (November 2011) The ready made VBox image is designed to run on Virtual Box. It can be converted to VMware (see another BLOG). 1/ http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html VBox Image Deployment Guide Sampleapp_v107_GA.ovf - VBox image key file The above http URL provides the user:password for the ftp URLs below. 2/ ftp://user:[email protected]/static/SampleAppV107/ 12 "7-zip" files Sampleapp_v107_GA_7_20.7z.001 -> .012 We recommend 7-zip file manager for unzipping (http://www.7-zip.org/). Select Unzip here option, it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save zip file under the root directory (e.g. C:\ or D:\) because of possible long pathnames. 3/ ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ 4 files Sampleapp_v107_GA-disk[1234].vmdk Important note: Check the provided checksums (md5sum). Please do it! 
DISTRIBUTION as Installation files for existing OBI 11.1.1.5 (November 2011)
http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
Install files Deployment Guide SampleApp_10722_1.zip - 198 MB

CAVEAT
Many computers have RAM chip problems that often stay silent ... until you manipulate big files. It is strongly advised that you run a memory check program, e.g. MEMTEST from the GRUB boot manager. Running md5sum repeatedly on the very same big file must give a consistent result [the same hash]; otherwise a hardware memory problem should be suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. 80 GB of free disk space is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of running OBIEE and Essbase together.
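A minimal sketch of the checksum verification recommended above, written in C# rather than with the md5sum utility; the file name is just an example of one of the downloaded parts, and you would compare the printed hash against the value published on the download page. Getting different results from two runs on the same file points to the hardware memory problem described in the caveat.

    using System;
    using System.IO;
    using System.Security.Cryptography;

    class Md5Check
    {
        static void Main(string[] args)
        {
            // Example file name; pass the real path on the command line.
            string file = args.Length > 0 ? args[0] : "Sampleapp_v107_GA_7_20.7z.001";

            using (MD5 md5 = MD5.Create())
            using (FileStream stream = File.OpenRead(file))
            {
                byte[] hash = md5.ComputeHash(stream);
                Console.WriteLine(BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant());
            }
        }
    }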

    Read the article

  • A Few of My Favorite HTML5 and CSS3 Online Tools

    - by dwahlin
    I really enjoy coding up HTML5, CSS3, and JavaScript applications but there are some things that I’m better off writing with the help of a development tool. For example, CSS3 gradients aren’t exactly the most fun thing to write by hand and the same could be said for animations, transforms, or styles that require various vendor extensions. There are a lot of online tools that can simplify building HTML5/CSS3 sites and increase productivity in the process so I thought I’d put together a post on a few of my favorites tools. HTML5 Boilerplate HTML5 Boilerplate provides a great way to get started building HTML5 sites. It includes many best practices out of the box and even includes a few tricks that many people don’t even know about. The custom download option allows you to pick the features that you want to include in the files that’s generated. You can read more about it here.   Initializr Although HTML5 Boilerplate provides a great foundation for starting HTML5 sites, it focuses on providing a starting shell structure (namely an html page, JavaScript files, and a CSS stylesheet) and doesn’t include much in the way of page content to get started with. Initializer builds on HTML5 Boilerplate and provides an initial test page that can be tweaked to meet your needs. It also provides several different customization options to include/exclude features. CSS3 Maker CSS3 provides a lot of great features ranging from gradient support to rounded corners. Although many of the features are fairly straightforward there are some that are pretty involved such as gradients, animations, and really any styles that require custom vendor extensions to use across browsers. Sure, you can type everything by hand, but sites such as CSS3 Maker provide a visual way to generate CSS3 styles. CSS3, Please! CSS3, Please! is a code generation tool that can be used to generate cross-browser CSS3 styles quickly and easily. All of the main things you can do with CSS3 are available including a clever way to visually generate CSS3 transform styles.       Ultimate CSS Gradient Generator CSS3 Maker (above) has a gradient generator built-in but my favorite tool for creating CSS3 gradients is the Ultimate CSS Gradient Generator. If you’ve created gradients in tools like Photoshop then you’ll love what this tool has to offer especially since it makes it extremely straightforward to work with different gradient stops. @font-face Fonts Although @font-face has been available for awhile, I think fonts are cool and wanted to mention a site that provides a lot of font choices. When used correctly fonts can really enhance a page and when used incorrectly (think Comic Sans) they can absolutely ruin a page. Several sites exist that provide fonts that can be used with @font-face definitions in CSS style sheets. One of my favorites is Font Squirrel.   HTML5 & CSS3 Support and Tests Interested in knowing what HTML5 and CSS3 features a given browser supports? Want to know how various browsers stack up with each other as far as HTML5/CSS3 support. Look no further than the HTML5 & CSS3 Support page or the HTML5 Test page.   CSS3 Easing Animation Tool CSS3 animations aren’t widely supported across browsers right now (I’m not really using them at this point) but they do offer a lot of promise. Creating easings for animations can definitely be a challenge but they’re something that are critical for adding that “professional touch” to your animations. 
Fortunately you can use the Ceaser CSS Easing Animation Tool to simplify the process and handle animation easing with…ease. There are several other online tools that I like, but these are some of the ones I find myself using the most. If you have any favorite online tools that simplify working with HTML5 or CSS3, let me know. For more information about onsite or online training, mentoring and consulting solutions for HTML5, jQuery, .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • Understanding Collabnet's LDAP binding

    - by Robert May
    We want to use both subversion usernames and passwords as well as Active Directory for our authentication on our Collabnet subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet’s documentation is pretty poor. To supplement that documentation, I add my own. The first thing to understand is that the attribute that you specify in the LDAP Login Attribute ONLY applies to lookups done for the user.  It does NOT apply to the LDAP Bind DN field.  Second, know that the debug logs (error is the one you want) don’t give you debug information for the bind DN, just the login attempts.  Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory ldap. Because of these items, the values to set in those fields can be somewhat confusing.  You’ll want to have ADSI Edit handy (I also used ldp, which is installed by default on server 2008), since ADSI Edit can help you find stuff in your active directory.  Be careful, you can also break stuff. Here’s what should go into those fields. LDAP Security Level:  Should be set to None LDAP Server Host:  Should be set to the full name of a domain controller in your domain.  For example, dc.mydomain.com LDAP Server Port:  Should be set to 3268.  The default port of 389 will only query that specific server, not the global catalog.  By setting it to 3268, the global catalog will be queried, which is probably what you want. LDAP Base DN:  Should be set to the location where you want the search for users to begin.  By default, the search scope is set to sub, so all child organizational units below this setting will be searched.  In my case, I had created an OU specifically for users for group policies.  My value ended up being:  OU=MyOu,DC=domain,DC=org.   However, if you’re pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever).  Again, use ADSI edit and use the Distinguished Name that it shows. LDAP Bind DN:  This needs to be the Distinguished Name of the user that you’re going to use for binding (i.e. the user you’ll be impersonating) for doing queries.  In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org.  Why the double svn, you might ask?  That’s because the first and last name fields are set to svn and by default, the distinguished name is the first and last name fields!  That’s important.  Its NOT the username or account name!  Again, use ADSI edit, browse to the username you want to use, right click and select properties, and then search the attributes for the Distinguished Name.  Once you’ve found that, select it and click View and you can copy and paste that into this field. LDAP Bind Password:  This is the password for the account in the Bind DN LDAP login Attribute: sAMAccountName.  If you leave this blank, uid is used, which may not even be set.  This tells it to use the Account Name field that’s defined under the account tab for users in Active Directory Users and Computers.  Note that this attribute DOES NOT APPLY to the LDAP Bind DN.  You must use the full distinguished name of the bind DN.  This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don’t know. LDAP Search Scope:  Probably should stay at sub, but could be different depending on your situation. LDAP Filter:  I left mine blank, but you could provide one to limit what you want to see.  
LDP would be helpful for determining what this is. LDAP Server Certificate Verification: I left it checked, but didn't try it without it being checked. Hopefully this will save some others some pain when trying to get Collabnet set up. Technorati Tags: Subversion, collabnet
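Before wiring these values into Collabnet, it can save a lot of trial and error to verify the bind DN, password, base DN and sAMAccountName lookup outside of Subversion entirely. The sketch below is one way to do that from a Windows machine with C# and System.DirectoryServices; it is not part of the Collabnet setup, the host, DNs and account name are the example values from above or placeholders, and it only approximates the lookup Collabnet performs.

    using System;
    using System.DirectoryServices;

    class LdapBindCheck
    {
        static void Main()
        {
            // Example values: global catalog port 3268, the base DN to search under,
            // and the full distinguished name of the bind account.
            string path = "LDAP://dc.mydomain.com:3268/OU=MyOu,DC=domain,DC=org";
            string bindDn = "CN=svn svn,OU=MyOu,DC=domain,DC=org";
            string bindPassword = "your-bind-password";

            using (DirectoryEntry root = new DirectoryEntry(path, bindDn, bindPassword))
            using (DirectorySearcher searcher = new DirectorySearcher(root))
            {
                // Look a user up by sAMAccountName, the same attribute set in
                // the LDAP Login Attribute field ("jdoe" is a placeholder).
                searcher.Filter = "(sAMAccountName=jdoe)";
                searcher.SearchScope = SearchScope.Subtree;

                // A wrong bind DN or password will typically surface here as an exception.
                SearchResult result = searcher.FindOne();
                Console.WriteLine(result != null
                    ? "Found: " + result.Path
                    : "No match - check the base DN or filter.");
            }
        }
    }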

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    Untitled Document The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven’t installed it, I’m guessing this is for one of two reasons. Either you don’t know how it helps you or you don’t know how to install it. I’ll address both of those reasons today. First, let’s take a quick look at how My Oracle Support and the Oracle Configuration Manager work together to gain a good understanding of what their differences and roles are before we tackle the install.   Oracle Configuration Manger is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system and OCM uploads that data to Oracle’s customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support’s user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn’t need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from Collector tab. If you don’t see the Collector tab, click the More tab gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways to the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires the My Oracle Support credential (Support Identifier, username, and password) Proxy specification (Host IP Address, Port number, username and password) Installation Step-by-Step Download the collector zip file from My Oracle Support and place it into your $Oracle_Home Unzip the zip file you downloaded from My Oracle Support – this will create a directory named CCR with several subdirectories Using the command line go to “$ORACLE_HOME/CCR/bin” and run the following command “setupCCR” Provide your My Oracle Support credential: login, password, and Support Identifier The installer will start deploying the collector application You have installed the Collector Post Installation Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation. 
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager.

Related documents available on My Oracle Support:
- Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
- Oracle Configuration Manager Prerequisites [ID 728473.5]
- Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
- Oracle Configuration Manager Collection Overview [ID 728985.5]
- Oracle Configuration Manager Security Overview [ID 728982.5]
- Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >