Search Results

Search found 1660 results on 67 pages for 'region'.


  • problem with JsonStore and JsonReader

    - by kalan
    Hello. In my ExtJS application I use an EditorGridPanel to show data from the server:

        var applicationsGrid = new Ext.grid.EditorGridPanel({
            region: 'west', layout: 'fit',
            title: '<img src="../../Content/img/app.png" /> ??????????',
            collapsible: true, margins: '0 0 5 5', split: true, width: '30%',
            listeners: { 'viewready': { fn: function() {
                applicationsGridStatusBar.setText('??????????: ' + applicationsStore.getTotalCount());
            } } },
            store: applicationsStore,
            loadMask: { msg: '????????...' },
            sm: new Ext.grid.RowSelectionModel({
                singleSelect: true,
                listeners: { 'rowselect': { fn: applicationsGrid_onRowSelect } }
            }),
            viewConfig: { forceFit: true },
            tbar: [{ icon: '../../Content/img/add.gif', text: '????????' }, '-',
                   { icon: '../../Content/img/delete.gif', text: '???????' }, '-'],
            bbar: applicationsGridStatusBar,
            columns: [
                { header: '??????????', dataIndex: 'ApplicationName', tooltip: '???????????? ??????????',
                  sortable: true, editor: { xtype: 'textfield', allowBlank: false } },
                { header: '<img src="../../Content/img/user.png" />', dataIndex: 'UsersCount', align: 'center',
                  fixed: true, width: 50, tooltip: '?????????? ????????????? ??????????', sortable: true },
                { header: '<img src="../../Content/img/role.gif" />', dataIndex: 'RolesCount', align: 'center',
                  fixed: true, width: 50, tooltip: '?????????? ????? ??????????', sortable: true }
            ]
        });

    When I use a JsonStore without a reader it works, but when I try to update any field it calls my 'create' URL instead of 'update':

        var applicationsStore = new Ext.data.JsonStore({
            root: 'applications',
            totalProperty: 'total',
            idProperty: 'ApplicationId',
            messageProperty: 'message',
            fields: [{ name: 'ApplicationId' },
                     { name: 'ApplicationName', allowBlank: false },
                     { name: 'UsersCount', allowBlank: false },
                     { name: 'RolesCount', allowBlank: false }],
            id: 'app1234',
            proxy: new Ext.data.HttpProxy({
                api: {
                    create: '/api/applications/getapplicationslist1',
                    read: '/api/applications/getapplicationslist',
                    update: '/api/applications/getapplicationslist2',
                    destroy: '/api/applications/getapplicationslist3'
                }
            }),
            autoSave: true,
            autoLoad: true,
            writer: new Ext.data.JsonWriter({ encode: false, listful: false, writeAllFields: false })
        });

    I believe the problem is that I don't use a reader, but when I add a JsonReader the grid stops showing any data at all:

        var applicationReader = new Ext.data.JsonReader({
            root: 'applications',
            totalProperty: 'total',
            idProperty: 'ApplicationId',
            messageProperty: 'message',
            fields: [{ name: 'ApplicationId' },
                     { name: 'ApplicationName', allowBlank: false },
                     { name: 'UsersCount', allowBlank: false },
                     { name: 'RolesCount', allowBlank: false }]
        });

        var applicationsStore = new Ext.data.JsonStore({
            id: 'app1234',
            proxy: new Ext.data.HttpProxy({
                api: {
                    create: '/api/applications/getapplicationslist1',
                    read: '/api/applications/getapplicationslist',
                    update: '/api/applications/getapplicationslist2',
                    destroy: '/api/applications/getapplicationslist3'
                }
            }),
            reader: applicationReader,
            autoSave: true,
            autoLoad: true,
            writer: new Ext.data.JsonWriter({ encode: false, listful: false, writeAllFields: false })
        });

    Does anyone know what the problem might be and how to solve it? The data returned from my server is JSON-formatted and seems to be OK:

        {"message":"test","total":2,"applications":[{"ApplicationId":"f82dc920-17e7-45b5-98ab-03416fdf52b2","ApplicationName":"Archivist","UsersCount":6,"RolesCount":3},{"ApplicationId":"054e2e78-e15f-4609-a9b2-81c04aa570c8","ApplicationName":"Test","UsersCount":1,"RolesCount":0}]}

    Read the article

  • CLR: Multi Param Aggregate, Argument not in Final Output?

    - by OMG Ponies
    Why is my delimiter not appearing in the final output? It is initialized to a comma, but I only get roughly five spaces of whitespace between each attribute when using:

        SELECT [article_id],
               dbo.GROUP_CONCAT(0, t.tag_name, ',') AS col
        FROM   [AdventureWorks].[dbo].[ARTICLE_TAG_XREF] atx
        JOIN   [AdventureWorks].[dbo].[TAGS] t ON t.tag_id = atx.tag_id
        GROUP BY article_id

    The bit for DISTINCT works fine, but it operates within the Accumulate scope... Output:

        article_id | col
        -------------------------------------------------
        1          | a     a     b     c

    I only have rudimentary C# API knowledge. C# code:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;
        using System.Xml.Serialization;
        using System.Xml;
        using System.IO;
        using System.Collections;
        using System.Text;

        [Serializable]
        [SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
        public struct GROUP_CONCAT : IBinarySerialize
        {
            ArrayList list;
            string delimiter;

            public void Init()
            {
                list = new ArrayList();
                delimiter = ",";
            }

            public void Accumulate(SqlBoolean isDistinct, SqlString Value, SqlString separator)
            {
                delimiter = (separator.IsNull) ? "," : separator.Value;
                if (!Value.IsNull)
                {
                    if (isDistinct)
                    {
                        if (!list.Contains(Value.Value)) { list.Add(Value.Value); }
                    }
                    else
                    {
                        list.Add(Value.Value);
                    }
                }
            }

            public void Merge(GROUP_CONCAT Group)
            {
                list.AddRange(Group.list);
            }

            public SqlString Terminate()
            {
                string[] strings = new string[list.Count];
                for (int i = 0; i < list.Count; i++)
                {
                    strings[i] = list[i].ToString();
                }
                return new SqlString(string.Join(delimiter, strings));
            }

            #region IBinarySerialize Members

            public void Read(BinaryReader r)
            {
                int itemCount = r.ReadInt32();
                list = new ArrayList(itemCount);
                for (int i = 0; i < itemCount; i++)
                {
                    this.list.Add(r.ReadString());
                }
            }

            public void Write(BinaryWriter w)
            {
                w.Write(list.Count);
                foreach (string s in list)
                {
                    w.Write(s);
                }
            }

            #endregion
        }
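
    One thing worth checking (an assumption, not a confirmed diagnosis): a Format.UserDefined aggregate is serialized between Accumulate/Merge calls via Read and Write, and the methods above round-trip only list, not delimiter, so a deserialized instance can reach Terminate with the delimiter left at its default. A minimal hedged sketch of persisting the separator as well, replacing the two IBinarySerialize members above:

        // Hedged sketch: also round-trip the delimiter so Terminate sees it after
        // the aggregate has been serialized and deserialized by SQL Server.
        public void Read(BinaryReader r)
        {
            delimiter = r.ReadString();        // restore the separator first
            int itemCount = r.ReadInt32();
            list = new ArrayList(itemCount);
            for (int i = 0; i < itemCount; i++)
            {
                list.Add(r.ReadString());
            }
        }

        public void Write(BinaryWriter w)
        {
            w.Write(delimiter ?? ",");         // persist the separator alongside the values
            w.Write(list.Count);
            foreach (string s in list)
            {
                w.Write(s);
            }
        }

    Along the same lines, Merge could copy Group.delimiter so a merged partial aggregate does not lose the separator either.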

    Read the article

  • Is there anything wrong with having a few private methods exposing IQueryable<T> and all public meth

    - by Nate Bross
    I'm wondering if there is a better way to approach this problem. The objective is to reuse code. Let's say I have a LINQ-to-SQL DataContext and I've written a "repository style" class that wraps up a lot of the methods I need and exposes IQueryables (so far, no problem). Now I'm building a service layer to sit on top of this repository. Many of the service methods will be one-to-one with repository methods, but some will not. I think a code sample will illustrate this better than words.

        public class ServiceLayer
        {
            MyClassDataContext context;
            IMyRepository rpo;

            public ServiceLayer(MyClassDataContext ctx)
            {
                context = ctx;
                rpo = new MyRepository(context);
            }

            private IQueryable<MyClass> ReadAllMyClass()
            {
                // pretend there is some complex business logic here
                // and maybe some filtering of the current user's access to "all"
                // that I don't want to repeat in all of the public methods that
                // access MyClass objects.
                return rpo.ReadAllMyClass();
            }

            public IEnumerable<MyClass> GetAllMyClass()
            {
                // call the private IQueryable so we can do additional "in-database" processing
                return this.ReadAllMyClass();
            }

            public IEnumerable<MyClass> GetActiveMyClass()
            {
                // call the private IQueryable so we can do additional "in-database" processing,
                // in this case a .Where() clause
                return this.ReadAllMyClass().Where(mc => mc.IsActive.Equals(true));
            }

            #region "Something my class MAY need to do in the future"

            private IQueryable<MyOtherTable> ReadAllMyOtherTable()
            {
                // there could be additional constraints which define
                // "all" for the current user
                return context.MyOtherTable;
            }

            public IEnumerable<MyOtherTable> GetAllMyOtherTable()
            {
                return this.ReadAllMyOtherTable();
            }

            public IEnumerable<MyOtherTable> GetInactiveOtherTable()
            {
                return this.ReadAllMyOtherTable().Where(ot => ot.IsActive.Equals(false));
            }

            #endregion
        }

    This particular case is not the best illustration, since I could just call the repository directly in the GetActiveMyClass method, but let's presume that my private IQueryable does some extra processing and business logic that I don't want to replicate in both of my public methods. Is that a bad way to attack an issue like this? I don't see it being so complex that it really warrants building a third class to sit between the repository and the service class, but I'd like to get your thoughts. For the sake of argument, let's presume two additional things: this service is going to be exposed through WCF, and each of these public IEnumerable methods will call .Select(m => m.ToViewModel()) on the returned collection to convert it to a POCO for serialization. The service will eventually need to expose some context.SomeOtherTable which won't be wrapped into the repository.
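
    For context, one pattern sometimes used to share that "complex business logic" without adding a third layer is to move it into composable extension methods on IQueryable<MyClass>. This is only a hedged sketch that reuses the question's MyClass and IsActive; ApplyAccessFilter, Owner and IsPublic are hypothetical names introduced for illustration:

        // Hedged sketch: shared filters live in one place and stay IQueryable,
        // so they still execute in the database when composed by the service.
        using System.Linq;

        public static class MyClassQueryExtensions
        {
            public static IQueryable<MyClass> ApplyAccessFilter(this IQueryable<MyClass> source, string currentUser)
            {
                // stand-in for the real access/business rules
                return source.Where(mc => mc.Owner == currentUser || mc.IsPublic);
            }

            public static IQueryable<MyClass> OnlyActive(this IQueryable<MyClass> source)
            {
                return source.Where(mc => mc.IsActive);
            }
        }

        // Possible usage inside the service layer:
        // return rpo.ReadAllMyClass().ApplyAccessFilter(user).OnlyActive()
        //           .Select(m => m.ToViewModel()).ToList();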

    Read the article

  • Suggestions In Porting ASP.NET to MVC.NET - Is storing SiteConfiguration in Cache RESTful?

    - by DaveDev
    I've been tasked with porting/refactoring a web application platform that we have from ASP.NET to ASP.NET MVC. Ideally I could use all the existing platform's configurations to determine the properties of the site that is presented. Is it RESTful to keep a SiteConfiguration object, which contains all of our various page configuration data, in the System.Web.Caching.Cache? There are a lot of settings that need to be loaded when a user accesses our site, so it's inefficient for each user to load the same settings on every request. The SiteConfiguration object determines which master page, site configuration, style, and user controls are available to the client; some of the data it contains is as follows:

        public string SiteTheme { get; set; }
        public string Region { private get; set; }
        public string DateFormat { get; set; }
        public string NumberFormat { get; set; }
        public int WrapperType { private get; set; }
        public string LabelFileName { get; set; }
        public LabelFile LabelFile { get; set; }

        // the following two are the heavy ones

        // PageConfiguration contains lots of configuration data for each panel on the page
        public IList<PageConfiguration> Pages { get; set; }

        // This contains all the configurations for the factsheets we produce
        public List<ConfiguredFactsheet> ConfiguredFactsheets { get; set; }

    I was thinking of having a URL structure like this: www.MySite1.com/PageTemplate/UserControl/. The domain determines the SiteConfiguration object that is created, where MySite1.com is SiteId = 1 and MySite2.com is SiteId = 2 (and, in turn, the style, configurations for various pages, etc.). PageTemplate is the view that will be rendered, and it simply defines a layout for where I'm going to inject the UserControls. Can somebody please tell me if I'm completely missing the RESTful point here? I'd like to refactor the platform into MVC because it's better to work in, and I want to do it right, but with a minimum of reinventing the wheel, because otherwise it won't get approval. Any suggestions otherwise? Thanks
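
    For reference, the caching side of this is largely independent of how RESTful the URLs are. A hedged sketch of caching one configuration per host name with System.Web.Caching follows; SiteConfigurationCache and LoadFromDatabase are hypothetical names standing in for the platform's existing loading code:

        // Hedged sketch: cache one SiteConfiguration per host so every request for the
        // same site reuses the already-loaded settings instead of reloading them.
        using System;
        using System.Web;
        using System.Web.Caching;

        public static class SiteConfigurationCache
        {
            public static SiteConfiguration GetForHost(string host)
            {
                string key = "SiteConfiguration:" + host.ToLowerInvariant();
                var config = HttpRuntime.Cache[key] as SiteConfiguration;
                if (config == null)
                {
                    config = LoadFromDatabase(host);   // expensive load happens once per site
                    HttpRuntime.Cache.Insert(key, config, null,
                        DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
                }
                return config;
            }

            private static SiteConfiguration LoadFromDatabase(string host)
            {
                // placeholder for the existing configuration-loading code
                throw new NotImplementedException();
            }
        }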

    Read the article

  • Data Annotations validation Built into model

    - by Josh
    I want to build an object model that automatically wires in validation when I attempt to save an object. I am using DataAnnotations for my validation, and it all works well, but I think my inheritance is whacked. I am looking for some guidance on a better way to wire in my validation. To build in validation I have this interface:

        public interface IValidatable
        {
            bool IsValid { get; }
            ValidationResponse ValidationResults { get; }
            void Validate();
        }

    Then I have a base class that all my objects inherit from. I used a base class because I wanted to wire in the validation calls automatically. The issue is that the validation has to know the type of the class it is validating, so I use generics like so:

        public class CoreObjectBase<T> : IValidatable where T : CoreObjectBase<T>
        {
            #region IValidatable Members

            public virtual bool IsValid
            {
                get
                {
                    // First, check rules that always apply to this type
                    var result = new Validator<T>().Validate((T)this);
                    // return false if any violations occurred
                    return !result.HasViolations;
                }
            }

            public virtual ValidationResponse ValidationResults
            {
                get
                {
                    var result = new Validator<T>().Validate((T)this);
                    return result;
                }
            }

            public virtual void Validate()
            {
                // First, check rules that always apply to this type
                var result = new Validator<T>().Validate((T)this);
                // throw an error if any violations were detected
                if (result.HasViolations)
                    throw new RulesException(result.Errors);
            }

            #endregion
        }

    So I have this circular inheritance statement, and my classes look like this:

        public class MyClass : CoreObjectBase<MyClass>
        {
        }

    The problem occurs when I have a more complicated model. Because I can only inherit from one class, in a situation where inheritance makes sense I believe the child classes won't have validation on their properties:

        public class Parent : CoreObjectBase<Parent>
        {
            // properties validated
        }

        public class Child : Parent
        {
            // properties not validated?
        }

    I haven't really tested the validation in these cases yet, but I am pretty sure that anything in Child with a data annotation on it will not be automatically validated when I call Child.Validate(), due to the way the inheritance is configured. Is there a better way to do this?
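
    One alternative worth sketching, purely as a hedged suggestion: it swaps the question's custom Validator<T> for the framework's System.ComponentModel.DataAnnotations.Validator and validates the instance's runtime type, so it does not need the generic self-reference and also picks up annotated properties declared on derived classes. Treat ValidatableBase and GetValidationErrors as assumed names, not part of the original model:

        // Hedged sketch: validate against the instance's runtime type instead of a
        // compile-time generic parameter, using the built-in DataAnnotations Validator.
        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;

        public abstract class ValidatableBase
        {
            public bool IsValid
            {
                get { return !GetValidationErrors().Any(); }
            }

            public IList<ValidationResult> GetValidationErrors()
            {
                var results = new List<ValidationResult>();
                var context = new ValidationContext(this, serviceProvider: null, items: null);
                // validateAllProperties: true checks every annotated property on the
                // actual runtime type, including ones declared on derived classes.
                Validator.TryValidateObject(this, context, results, validateAllProperties: true);
                return results;
            }

            public void Validate()
            {
                var errors = GetValidationErrors();
                if (errors.Count > 0)
                    throw new ValidationException(errors[0], null, this);
            }
        }

    With this shape, Parent and Child can both inherit the same non-generic base, at the cost of moving off the custom Validator<T>/ValidationResponse types.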

    Read the article

  • WPF/INotifyPropertyChanged, change the value of txtA, the txtB and txtC should change automatically?

    - by user1033098
    I expected that once I change the value of txtA, txtB and txtC would update automatically, since I have implemented INotifyPropertyChanged for ValueA. But they are not updated in the UI: txtB is always 100 and txtC is always -50, and I don't know why. My XAML:

        <Window x:Class="WpfApplicationReviewDemo.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525">
            <StackPanel>
                <TextBox Name="txtA" Text="{Binding ValueA}" />
                <TextBox Name="txtB" Text="{Binding ValueB}" />
                <TextBox Name="txtC" Text="{Binding ValueC}" />
            </StackPanel>
        </Window>

    My code-behind:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                this.DataContext = new Model();
            }
        }

        public class Model : INotifyPropertyChanged
        {
            private decimal valueA;
            public decimal ValueA
            {
                get { return valueA; }
                set
                {
                    valueA = value;
                    PropertyChanged(this, new PropertyChangedEventArgs("ValueA"));
                }
            }

            private decimal valueB;
            public decimal ValueB
            {
                get
                {
                    valueB = ValueA + 100;
                    return valueB;
                }
                set
                {
                    valueB = value;
                    PropertyChanged(this, new PropertyChangedEventArgs("ValueB"));
                }
            }

            private decimal valueC;
            public decimal ValueC
            {
                get
                {
                    valueC = ValueA - 50;
                    return valueC;
                }
                set
                {
                    valueC = value;
                    PropertyChanged(this, new PropertyChangedEventArgs("ValueC"));
                }
            }

            #region INotifyPropertyChanged Members
            public event PropertyChangedEventHandler PropertyChanged;
            #endregion
        }

    After I add code to the set method of the ValueA property, it works:

        public decimal ValueA
        {
            get { return valueA; }
            set
            {
                valueA = value;
                PropertyChanged(this, new PropertyChangedEventArgs("ValueA"));
                PropertyChanged(this, new PropertyChangedEventArgs("ValueB"));
                PropertyChanged(this, new PropertyChangedEventArgs("ValueC"));
            }
        }

    But I expected txtB and txtC to refresh automatically. Please advise.
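
    As background: WPF only re-reads a binding when it receives a PropertyChanged event carrying that property's name, so calculated properties such as ValueB and ValueC do have to be announced whenever ValueA changes. A hedged sketch of the usual shape of such a model follows; the OnPropertyChanged helper is my addition, not part of the original code:

        // Hedged sketch: a null-guarded raise helper plus explicit notifications
        // for the calculated properties when ValueA changes.
        using System.ComponentModel;

        public class Model : INotifyPropertyChanged
        {
            private decimal valueA;
            public decimal ValueA
            {
                get { return valueA; }
                set
                {
                    valueA = value;
                    OnPropertyChanged("ValueA");
                    OnPropertyChanged("ValueB");   // dependent, calculated properties
                    OnPropertyChanged("ValueC");
                }
            }

            public decimal ValueB { get { return ValueA + 100; } }
            public decimal ValueC { get { return ValueA - 50; } }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)               // avoid a NullReferenceException with no subscribers
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    Note that with get-only calculated properties the txtB and txtC bindings would need Mode=OneWay (TextBox.Text defaults to TwoWay); alternatively, keep the setters as in the question and only add the extra notifications.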

    Read the article

  • SQL Server 2008 R2 Enterprise won't install on Windows 2008 R2 Enterprise

    - by Carlos Paulino
    I've been trying to install SQL Server on a new Windows Server 2008. I have tried everything but I haven't been able to narrow down the problem. When the installation fails I get " Exit code (Decimal): -2068643839". The problem with this is that according to Microsoft this is a generic error code. I follow their guide to look into the detail.txt inside C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\ But I can't find something that specifies the exact error. Any suggestions ? Thanks in advanced. I uploaded to detail.txt to http://www.megaupload.com/?d=0MV46SZH because it is to big to paste here. Below is the summary.txt ---------- Overall summary: Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Exit code (Decimal): -2068643839 Exit facility code: 1203 Exit error code: 1 Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Start time: 2011-02-28 11:29:56 End time: 2011-02-28 11:34:45 Requested action: Install Machine Properties: Machine name: SA-SERVER Machine processor count: 8 OS version: Windows Server 2008 R2 OS service pack: Service Pack 1 OS region: United States OS language: English (United States) OS architecture: x64 Process architecture: 64 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Package properties: Description: SQL Server Database Services 2008 R2 ProductName: SQL Server 2008 R2 Type: RTM Version: 10 SPLevel: 0 Installation location: F:\x64\setup\ Installation edition: ENTERPRISE User Input Settings: ACTION: Install ADDCURRENTUSERASSQLADMIN: True AGTSVCACCOUNT: NT AUTHORITY\SYSTEM AGTSVCPASSWORD: ***** AGTSVCSTARTUPTYPE: Manual ASBACKUPDIR: Backup ASCOLLATION: Latin1_General_CI_AS ASCONFIGDIR: Config ASDATADIR: Data ASDOMAINGROUP: <empty> ASLOGDIR: Log ASPROVIDERMSOLAP: 1 ASSVCACCOUNT: <empty> ASSVCPASSWORD: ***** ASSVCSTARTUPTYPE: Automatic ASSYSADMINACCOUNTS: <empty> ASTEMPDIR: Temp BROWSERSVCSTARTUPTYPE: Disabled CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini CUSOURCE: ENABLERANU: False ENU: True ERRORREPORTING: False FARMACCOUNT: <empty> FARMADMINPORT: 0 FARMPASSWORD: ***** FEATURES: SQLENGINE,BIDS,CONN,IS,BC,SDK,SSMS,ADV_SSMS,SNAC_SDK,OCS FILESTREAMLEVEL: 0 FILESTREAMSHARENAME: <empty> FTSVCACCOUNT: <empty> FTSVCPASSWORD: ***** HELP: False IACCEPTSQLSERVERLICENSETERMS: False INDICATEPROGRESS: False INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\ INSTALLSQLDATADIR: <empty> INSTANCEDIR: D:\SQLServer INSTANCEID: MSSQLSERVER INSTANCENAME: MSSQLSERVER ISSVCACCOUNT: NT AUTHORITY\SYSTEM ISSVCPASSWORD: ***** ISSVCSTARTUPTYPE: Automatic NPENABLED: 0 PASSPHRASE: ***** PCUSOURCE: PID: ***** QUIET: False QUIETSIMPLE: False ROLE: AllFeatures_WithDefaults RSINSTALLMODE: FilesOnlyMode RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE RSSVCPASSWORD: ***** RSSVCSTARTUPTYPE: Automatic SAPWD: ***** SECURITYMODE: SQL SQLBACKUPDIR: <empty> SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS SQLSVCACCOUNT: NT AUTHORITY\SYSTEM SQLSVCPASSWORD: ***** SQLSVCSTARTUPTYPE: Automatic SQLSYSADMINACCOUNTS: SA-SERVER\Administrator SQLTEMPDBDIR: <empty> SQLTEMPDBLOGDIR: <empty> SQLUSERDBDIR: <empty> SQLUSERDBLOGDIR: <empty> SQMREPORTING: 
False TCPENABLED: 1 UIMODE: Normal X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini Detailed results: Feature: Database Engine Services Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: SQL Client Connectivity SDK Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Integration Services Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools Connectivity Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Management Tools - Complete Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Management Tools - Basic Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools SDK Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools Backwards Compatibility Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Business Intelligence Development Studio Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Microsoft Sync Framework Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Rules with failures: Global rules: Scenario specific rules: Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\SystemConfigurationCheck_Report.htm

    Read the article

  • Linux bcm43224 wifi adapter slows down a couple minutes after boot

    - by Blubber
    I just installed Ubuntu on my mid 2012 MacBook Air. Everything worked out of the box, but the wifi is showing some weird behavior. When I first login it's really fast, loading google.com is near instant, and browsing in general feels at least as smooth as it did on Mac OS. However, after a couple minutes the connection slows down dramatically, sometimes it takes over 5s to load google.com, a simple reboot fixes the problem for another couple minutes. Specs: Wifi: 02:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01) Driver: open-source brcmsmac driver Kernel: Linux wega 3.8.0-21-generic #32-Ubuntu SMP Tue May 14 22:16:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Distro: Ubuntu 13.04 (uptodate) I tried a number of things, none of which actually helped Use proprietary sta driver from broadcom Installed firmware into /lib/firmware/brcms (which, as far as I can tell from logs, does not get loaded at all) Switch router to only use 2.4 OR 5 GHz Set router to only use a OR g OR n Set router to use AES encryption only Turned off power management on the adapter Set regulatory region to the correct value (NL) on both router and laptop Disable ipv6 Nothing seems to help, the slowdown always occurs. I did notice that the latency (ping google.com) stays roughly the same (around 9ms). Below is some more information that might be of use. $ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01) Subsystem: Apple Inc. Device [106b:00e9] Kernel driver in use: bcma-pci-bridge $ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no $ lsmod Module Size Used by dm_crypt 22820 1 arc4 12615 2 brcmsmac 550698 0 coretemp 13355 0 kvm_intel 132891 0 parport_pc 28152 0 kvm 443165 1 kvm_intel ppdev 17073 0 cordic 12574 1 brcmsmac brcmutil 14755 1 brcmsmac mac80211 606457 1 brcmsmac cfg80211 510937 2 brcmsmac,mac80211 bnep 18036 2 rfcomm 42641 12 joydev 17377 0 applesmc 19353 0 input_polldev 13896 1 applesmc snd_hda_codec_hdmi 36913 1 microcode 22881 0 snd_hda_codec_cirrus 23829 1 nls_iso8859_1 12713 1 uvcvideo 80847 0 btusb 22474 0 snd_hda_intel 39619 3 videobuf2_vmalloc 13056 1 uvcvideo snd_hda_codec 136453 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_cirrus bcm5974 17347 0 bluetooth 228619 22 bnep,btusb,rfcomm snd_hwdep 13602 1 snd_hda_codec lpc_ich 17061 0 videobuf2_memops 13202 1 videobuf2_vmalloc videobuf2_core 40513 1 uvcvideo videodev 129260 2 uvcvideo,videobuf2_core bcma 41051 1 brcmsmac snd_pcm 97451 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30180 1 snd_seq_midi snd_seq 61554 2 snd_seq_midi_event,snd_seq_midi snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29425 2 snd_pcm,snd_seq snd 68876 16 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_hda_codec_cirrus mei 41158 0 soundcore 12680 1 snd apple_bl 13673 0 mac_hid 13205 0 lp 17759 0 parport 46345 3 lp,ppdev,parport_pc usb_storage 57204 0 hid_apple 13237 0 hid_generic 12540 0 ghash_clmulni_intel 13259 0 aesni_intel 55399 399 aes_x86_64 17255 1 aesni_intel xts 12885 1 aesni_intel lrw 13257 1 aesni_intel gf128mul 14951 2 lrw,xts ablk_helper 13597 1 aesni_intel cryptd 20373 4 ghash_clmulni_intel,aesni_intel,ablk_helper i915 600351 3 ahci 25731 3 libahci 31364 1 ahci video 19390 1 i915 
i2c_algo_bit 13413 1 i915 drm_kms_helper 49394 1 i915 usbhid 47074 0 drm 286313 4 i915,drm_kms_helper hid 101002 3 hid_generic,usbhid,hid_apple $ dmesg | egrep 'b43|bcma|brcm|[F]irm' [ 0.055025] [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled [ 0.152336] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 2.187681] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-99] only partially covers this bridge [ 12.553600] bcma-pci-bridge 0000:02:00.0: enabling device (0000 -> 0002) [ 12.553657] bcma: bus0: Found chip with id 0xA8D8, rev 0x01 and package 0x08 [ 12.553688] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0) [ 12.553715] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0) [ 12.553764] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0) [ 12.605777] bcma: bus0: Bus registered [ 12.852925] brcmsmac bcma0:0: mfg 4bf core 812 rev 23 class 0 irq 17 [ 13.085176] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: false (implement) [ 13.085186] brcmsmac bcma0:0: brcms_ops_config: change power-save mode: false (implement) [ 20.862617] brcmsmac bcma0:0: brcmsmac: brcms_ops_bss_info_changed: associated [ 20.862622] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement) [ 20.862625] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 20.897957] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) $ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"wlan" Mode:Managed Frequency:5.22 GHz Access Point: E0:46:9A:4E:63:9A Bit Rate=65 Mb/s Tx-Power=17 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=63/70 Signal level=-47 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:13 Invalid misc:56 Missed beacon:0

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVI's from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable. So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind.   Basically, I'd imagine there's an encoder and player tools (like ffmpeg vs ffplay; or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - a pseudocode would look like: videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi ... or, it could have a "playlist" text file, like: vid1.avi 00:00:30:12 00:01:45:00 vid2.avi 00:05:00:00 00:07:12:25 mypicture.png - 00:00:02:00 vid3.avi 00:02:00:00 00:02:45:10 ... so it could be called with videnctool -compose --playlist=playlist.txt --output=editedvid.avi The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or in lack of that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.   The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to work on cutting pieces first from individual files first (each of which would demand own temporary file on disk), and then joining them in a final video file. I would then imagine, that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode: vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13 ... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction ( like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and include any file that may end up in that region in the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering final output... which I myself think would be nice. So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?

    Read the article

  • The best dvd ripper software in 2014 review

    - by user328170
    The top 3 DVD Ripping Tools in 2014 Nowadays everyone may have several smart mobile devices, such as iphone, ipad air, ipad mini ,Samsung Galaxy and Sony Xperia. If you want to take your movies with your mobile devices, or sometimes just want to backup those classic physical discs on your notebook or workstation with high quality resolutions, you need a fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of great software products designed to make the process easy and transform DVD to the files that are playable on any mobile device you choose. We have done a full review on dozens of products. Here are five of the best, based on our review. We test the software from its ripping speed, friendly use guide , reliability and ripping capability. The top one is still Winx DVD Ripper platinum. We've test its 6.1 version 2 years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high quality MKV files with a single click. It gave us deep impression in the test. This time we test it’s lastest 7.3.5 version. Besides easy use and speed, we test its capability to decrypt all kinds of discs with different protect method, for example, Disney X-project DRM , Sony ArccOS, RCE and region code. The result shows that winx dvd ripper platinum still maintain its advantages in all the area. Winx dvd ripper platinum is a more focused on DVD ripping software with the basic duty to rip and convert DVD. The color of UI is a modern technical sense. All the main functions are shown obviously while others specials are hidden for advanced users, making it more clear and convenient to make option. There are two company weisoft limited and Digiarty who can provide the software. Weisoft limited focus on USA, UK and Australia market. Digiarty focus on others. ripping speed ????? friendly use guide ????? reliability ????? ripping capability ????? The second one DVDFab DVDFab is also very robust during ripping dics. It can also decrypt most of the dics in the market. The shortage it still friendly use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs . The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you say they hey, storage is cheap, and the rips from DVDFab are easy, one-click, and work. ripping speed ??? friendly use guide ???? reliability ????? ripping capability ????? The Third one is Handbrake Handbrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers a lots of options to get the high quality file as a result. If you're scared by them, you don't even have to use them—the app will compensate for you and pick some settings it thinks you'll like based on your destination device. So many of you like Handbrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy)—you'll let another app do the rip and crack the DRM on your discs, and then process the file through Handbrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source. ripping speed ??? friendly use guide ???? reliability ???? ripping capability ????

    Read the article

  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to page-based renderer like PDF or Word) will be rendered such that the first page with the row group headers and the inital set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2.This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders of the tablix report item and changing its value to True. The second (and subsequent pages) of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables: SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName, d.FullDateAlternateKey, f.SalesOrderNumber, p.EnglishProductName, sum(SalesAmount) as SalesAmount FROM FactResellerSales AS f INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group. 
Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup,click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber" ). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber. 
Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final  thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • ASP.NET Controls – CommunityServer Captcha ControlAdapter, a practical case

    - by nmgomes
    The ControlAdapter is available since .NET framework version 2.0 and his main goal is to adapt and customize a control render in order to achieve a specific behavior or layout. This customization is done without changing the base control. A ControlAdapter is commonly used to custom render for specific platforms like Mobile. In this particular case the ControlAdapter was used to add a specific behavior to a Control. In this  post I will use one adapter to add a Captcha to all WeblogPostCommentForm controls within pontonetpt.com CommunityServer instance. The Challenge The ControlAdapter complexity is usually associated with the complexity/structure of is base control. This case is precisely one of those since base control dynamically load his content (controls) thru several ITemplate. Those of you who already played with ITemplate knows that while it is an excellent option for control composition it also brings to the table a big issue: “Controls defined within a template are not available for manipulation until they are instantiated inside another control.” While analyzing the WeblogPostCommentForm control I found that he uses the ITemplate technique to compose it’s layout and unfortunately I also found that the template content vary from theme to theme. This could have been a problem but luckily WeblogPostCommentForm control template content always contains a submit button with a well known ID (at least I can assume that there are a well known set of IDs). Using this submit button as anchor it’s possible to add the Captcha controls in the correct place. Another important finding was that WeblogPostCommentForm control inherits from the WrappedFormBase control which is the base control for all CommunityServer input forms. Knowing this inheritance link the main goal has changed to became the creation of a base ControlAdapter that  could be extended and customized to allow adding Captcha to: post comments form contact form user creation form. And, with this mind set, I decided to used the following ControlAdapter base class signature :public abstract class WrappedFormBaseCaptchaAdapter<T> : ControlAdapter where T : WrappedFormBase { }Great, but there are still many to do … Captcha The Captcha will be assembled with: A dynamically generated image with a set of random numbers A TextBox control where the image number will be inserted A Validator control to validate whether TextBox numbers match the image numbers This is a common Captcha implementation, is not rocket science and don’t bring any additional problem. The main problem, as told before, is to find the correct anchor control to ensure a correct Captcha control injection. 
The anchor control can vary by: target control  theme Implementation To support this dynamic scenario I choose to use the following implementation:private List<string> _validAnchorIds = null; protected virtual List<string> ValidAnchorIds { get { if (this._validAnchorIds == null) { this._validAnchorIds = new List<string>(); this._validAnchorIds.Add("btnSubmit"); } return this._validAnchorIds; } } private Control GetAnchorControl(T wrapper) { if (this.ValidAnchorIds == null || this.ValidAnchorIds.Count == 0) { throw new ArgumentException("Cannot be null or empty", "validAnchorNames"); } var q = from anchorId in this.ValidAnchorIds let anchorControl = CSControlUtility.Instance().FindControl(wrapper, anchorId) where anchorControl != null select anchorControl; return q.FirstOrDefault(); } I can now, using the ValidAnchorIds property, configure a set of valid anchor control  Ids. The GetAnchorControl method searches for a valid anchor control within the set of valid control Ids. Here, some of you may question why to use a LINQ To Objects expression, but the important here is to notice the usage of CSControlUtility.Instance().FindControl CommunityServer method. I want to build on top of CommunityServer not to reinvent the wheel. Assuming that an anchor control was found, it’s now possible to inject the Captcha at the correct place. This not something new, we do this all the time when creating server controls or adding dynamic controls:protected sealed override void CreateChildControls() { base.CreateChildControls(); if (this.IsCaptchaRequired) { T wrapper = base.Control as T; if (wrapper != null) { Control anchorControl = GetAnchorControl(wrapper); if (anchorControl != null) { Panel phCaptcha = new Panel {CssClass = "CommonFormField", ID = "Captcha"}; int index = anchorControl.Parent.Controls.IndexOf(anchorControl); anchorControl.Parent.Controls.AddAt(index, phCaptcha); CaptchaConfiguration.DefaultProvider.AddCaptchaControls( phCaptcha, GetValidationGroup(wrapper, anchorControl)); } } } } Here you can see a new entity in action: a provider. This is a CaptchaProvider class instance and is only goal is to create the Captcha itself and do everything else is needed to ensure is correct operation.public abstract class CaptchaProvider : ProviderBase { public abstract void AddCaptchaControls(Panel captchaPanel, string validationGroup); } You can create your own specific CaptchaProvider class to use different Captcha strategies including the use of existing Captcha services  like ReCaptcha. Once the generic ControlAdapter was created became extremely easy to created a specific one. Here is the specific ControlAdapter for the WeblogPostCommentForm control:public class WeblogPostCommentFormCaptchaAdapter : WrappedFormBaseCaptchaAdapter<WrappedFormBase> { #region Overriden Methods protected override List<string> ValidAnchorIds { get { List<string> validAnchorNames = base.ValidAnchorIds; validAnchorNames.Add("CommentSubmit"); return validAnchorNames; } } protected override string DefaultValidationGroup { get { return "CreateCommentForm"; } } #endregion Overriden Methods } Configuration This is the magic step. Without changing the original pages and keeping the application original assemblies untouched we are going to add a new behavior to the CommunityServer application. 
To glue everything together you must follow this steps: Add the following configuration to default.browser file:<?xml version='1.0' encoding='utf-8'?> <browsers> <browser refID="Default"> <controlAdapters> <!-- Adapter for the WeblogPostCommentForm control in order to add the Captcha and prevent SPAM comments --> <adapter controlType="CommunityServer.Blogs.Controls.WeblogPostCommentForm" adapterType="NunoGomes.CommunityServer.Components.WeblogPostCommentFormCaptchaAdapter, NunoGomes.CommunityServer" /> </controlAdapters> </browser> </browsers> Add the following configuration to web.config file:<configuration> <configSections> <!-- New section for Captcha providers configuration --> <section name="communityServer.Captcha" type="NunoGomes.CommunityServer.Captcha.Configuration.CaptchaSection" /> </configSections> <!-- Configuring a simple Captcha provider --> <communityServer.Captcha defaultProvider="simpleCaptcha"> <providers> <add name="simpleCaptcha" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProvider, NunoGomes.CommunityServer" imageUrl="~/captcha.ashx" enabled="true" passPhrase="_YourPassPhrase_" saltValue="_YourSaltValue_" hashAlgorithm="SHA1" passwordIterations="3" keySize="256" initVector="_YourInitVectorWithExactly_16_Bytes_" /> </providers> </communityServer.Captcha> <system.web> <httpHandlers> <!-- The Captcha Image handler used by the simple Captcha provider --> <add verb="GET" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" /> </httpHandlers> </system.web> <system.webServer> <handlers accessPolicy="Read, Write, Script, Execute"> <!-- The Captcha Image handler used by the simple Captcha provider --> <add verb="GET" name="captcha" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" /> </handlers> </system.webServer> </configuration> Conclusion Building a ControlAdapter can be complex but the reward is his ability to allows us, thru configuration changes, to modify an application render and/or behavior. You can see this ControlAdapter in action here and here (anonymous required). A complete solution is available in “CommunityServer Extensions” Codeplex project.

    Read the article

  • CodePlex Daily Summary for Monday, March 22, 2010

    CodePlex Daily Summary for Monday, March 22, 2010New Projects[Tool] Vczh Non-public DLL Classes Caller: Generate C# code for you to call non-public classes in DLLs very easily.Artefact Animator: Artefact Animator provides an easy to use framework for procedural time-based animations in Silverlight and WPF.cacheroo: Cacheroo is a social networking community that will make it easier for people who love geocaching to get connected.Data Processing Toolkit: An utility app to collected data from different sources (i.e. bugzilla bug reports) in a structured way. We are currently setting up the site. Mo...eXternal SQL Bridge (PHP): The eXternal SQL Bridge (XSB) allows you to bridge two websites together in a secure manner through pre-shared keys. XSB is resilient against repla...'G' - Language to Define Gestures for Touch Based Applications: A cross plat form multi-touch application framework with a language to define gestures. The application is build on Silverlight 4.0 and the languag...IIS Network Diagnostic Tools: Web implementation of "looking glass" like services (ping, traceroute) as HTTP modules for Internet Information Services.Interop Router: This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.L2 Commander: L2Commander makes it easier for both new and old l2j users to manage your server.You no longer have to waste time on finding the files you need and...MediaHelper: A utility to help clean up empty/unwanted files and folders in your filesystem.mhinze: matt hinze stuffOneMan: Focus on Silverlight and WCF technology.Rss Photo Frame Android Widget: RSS Photo Frame Android Widget permits showing pictures from any RSS feed on your Android device's desktopSingle Web Session: Web Tool Kits Current project provide developer with different tools that help to enhance web site performance, security, and other common functio...Work Item Visualization: Use DGML to visualize and analyze your TFS Work Items. Included is the ability to perform basic risk/impact analysis. It helps answer the question,...New Releases[Tool] Vczh Non-public DLL Classes Caller: Wrapper Coder (beta): Click "<Click Me To Open Assembly File>", WrapperCoder will load the assembly and referenced assembly. Check the non-public classes that you want...APS - Automatic Print Screen: APS 1.0: APS automatizes the tasks of paste the image in Paint and save it after print screen or alt+print screen. Choose directory, name and file extension...BTP Tools: e-Sword generator build 20100321: 1. Modify the indent after subtitle. 2. Add 2 spaces after subtitle.Combres - WebForm & MVC Client-side Resource Combine Library: Combres 2.0: Changes since last version (1.2) Support ignore Combres pipeline in debug mode - see issue #6088 Debug mode generates comment helping identify in...Desafio Office 2010 Brasil: DesafioOutlook: Controlando um robo com o Outlook 2010dylan.NET: dylan.NET v. 9.4: Adding Platform Invocation Services Support, full Managed Pointer Support, Charset,Dllimport,Callconv setting for P/Invoke, MarshalAs for parametersFamily Tree Analyzer: Version 1.3.2.0: Version 1.3.2.0 Add open folder button to IGI Search Form Fixes to Fact Location processing - IGIName renamed to RegionID Fix if Region ID not fou...Fasterflect - A Fast and Simple Reflection API: Fasterflect 2.0: We are pleased to release version 2.0 of Fasterflect, which contains a lot of additions and improvements from the previous version. 
Please refer t...IIS Network Diagnostic Tools: 1.0: Initial public release.Informant: Informant (Desktop) v0.1: This release allows users to send sms messages to 1-Many Groups or 1-Many contacts. It is a very basic release of the application. No styling has b...InfoService: InfoService v1.5 - MPE1 Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.InfoService: InfoService v1.5 - RAR Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.L2 Commander: Source Code Link: Where to find our source.ModularCMS: ModularCMS 1.2: Minor bug fixes.NMTools: NMTools-v40b0-20100321-0: The most noticeable aspect of this release is that NMTools is now an independent project. It will no longer tied to OpenSLIM. Nevertheless, OpenSLI...SharePoint LogViewer: SharePoint LogViewer 1.5.3: Log loading performance enhanced. Search text box now has auto complete feature.Single Web Session: Single Web Session: !Single Web Session! <httpModules> <add name="SingleSession" type="SingleWebSession.Model.WebSessionModule, SingleWebSession"/> </httpModules>Sprite Sheet Packer: 2.1 Release: Made a few crucial fixes from 2.0: - Fixed error with paths having spaces. - Fixed error with UI not unlocking. - Fixed NullReferenceException on ...uManage - AD Self-Service Portal: uManage v1.1 (.NET 4.0 RC): Updated Releasev1.1 Adds the primary ability to setup and configure the application through a setup wizard. The setup wizard will continue to evol...VCC: Latest build, v2.1.30321.0: Automatic drop of latest buildVS ChessMania: VS ChessMania V2 March Beta: Second Beta Release with move correction and making application more safe for user. New features will be added soon.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.9.00: Whats New Added New Toolbar Plugin (By Kent Safransk) 'MediaEmbed' to Include Embed Media from Youtube, Vimeo, etc. Media Embed Plugin Added New ...WeatherBar: WeatherBar 1.0 [No Installation]: Extract the ZIP archive and run WeatherBar.exe. Current release contains some bugs that will be fixed in the next version. Check the Issue Tracker...Work Item Visualization: Release 1.0: This is the initial release of the Work Item Visualization tool. There are no known issues when it comes to the visualization aspects of the tool b...WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.10: Version: 1.0.0.10 (Milestone 10): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...WPF AutoComplete TextBox Control: Version 1.2: What's Newadds AutoAppend feature adds a new provider: UrlHistoryDataProvider sample application is updated to reflect the new things Bug Fixe...ZoomBarPlus: V2 (Beta): - Fixed bug: if the active window changed while you were in the middle of a single tap delay, long tap delay, or swipe-repeat, it would continue re...Most Popular ProjectsMetaSharpSavvy DateTimeRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)Most Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelFarseer Physics Enginepatterns & practices – Enterprise LibraryBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • How can a code editor effectively hint at code nesting level - without using indentation?

    - by pgfearo
    I've written an XML text editor that provides 2 view options for the same XML text, one indented (virtually), the other left-justified. The motivation for the left-justified view is to help users 'see' the whitespace characters they're using for indentation of plain-text or XPath code without interference from indentation that is an automated side-effect of the XML context. I want to provide visual clues (in the non-editable part of the editor) for the left-justified mode that will help the user, but without getting too elaborate. I tried just using connecting lines, but that seemed too busy. The best I've come up with so far is shown in a mocked-up screenshot of the editor below, but I'm seeking better/simpler alternatives (that don't require too much code). [Edit] Taking the heatmap idea (from: @jimp) I get this and 3 alternatives - labelled a, b and c: The following section describes the accepted answer as a proposal, bringing together ideas from a number of other answers and comments. As this question is now community wiki, please feel free to update this. NestView The name for this idea, which provides a visual method to improve the readability of nested code without using indentation. Contour Lines The name for the differently shaded lines within the NestView. The image above shows the NestView used to help visualise an XML snippet. Though XML is used for this illustration, any other code syntax that uses nesting could have been used instead. An Overview: The contour lines are shaded (as in a heatmap) to convey nesting level. The contour lines are angled to show when a nesting level is being either opened or closed. A contour line links the start of a nesting level to the corresponding end. The combined width of contour lines gives a visual impression of nesting level, in addition to the heatmap. The width of the NestView may be manually resizable, but should not change as the code changes. Contour lines can either be compressed or truncated to achieve this. Blank lines are sometimes used in code to break up text into more digestible chunks. Such lines could trigger special behaviour in the NestView. For example the heatmap could be reset or a background color contour line used, or both. One or more contour lines associated with the currently selected code can be highlighted. The contour line associated with the selected code level would be emphasized the most, but other contour lines could also 'light up' to help highlight the containing nested group. Different behaviors (such as code folding or code selection) can be associated with clicking/double-clicking on a Contour Line. Different parts of a contour line (leading, middle or trailing edge) may have different dynamic behaviors associated. Tooltips can be shown on a mouse hover event over a contour line. The NestView is updated continuously as the code is edited. Where nesting is not well balanced, assumptions can be made about where the nesting level should end, but the associated temporary contour lines must be highlighted in some way as a warning. Drag and drop behaviors of Contour Lines can be supported. Behaviour may vary according to the part of the contour line being dragged. Features commonly found in the left margin, such as line numbering and colour highlighting for errors and change state, could overlay the NestView. Additional Functionality The proposal addresses a range of additional issues - many are outside the scope of the original question, but a useful side-effect.
Visually linking the start and end of a nested region The contour lines connect the start and end of each nested level. Highlighting the context of the currently selected line As code is selected, the associated nest-level in the NestView can be highlighted. Differentiating between code regions at the same nesting level In the case of XML, different hues could be used for different namespaces. Programming languages (such as C#) support named regions that could be used in a similar way. Dividing areas within a nesting area into different visual blocks Extra lines are often inserted into code to aid readability. Such empty lines could be used to reset the saturation level of the NestView's contour lines. Multi-Column Code View Code without indentation makes the use of a multi-column view more effective because word-wrap or horizontal scrolling is less likely to be required. In this view, once code has reached the bottom of one column, it flows into the next one: Usage beyond merely providing a visual aid As proposed in the overview, the NestView could provide a range of editing and selection features which would be broadly in line with what is expected from a TreeView control. The key difference is that a typical TreeView node has 2 parts: an expander and the node icon. A NestView contour line can have as many as 3 parts: an opener (sloping), a connector (vertical) and a closer (sloping). On Indentation The NestView presented alongside non-indented code complements, but is unlikely to replace, the conventional indented code view. It's likely that any solution adopting a NestView will provide a method to switch seamlessly between indented and non-indented code views without affecting any of the code text itself - including whitespace characters. One technique for the indented view would be 'Virtual Formatting' - where a dynamic left-margin is used in lieu of tab or space characters. The same nesting-level data used to dynamically render the NestView could also be used for the more conventional-looking indented view. Printing Indentation will be important for the readability of printed code. Here, the absence of tab/space characters and a dynamic left-margin means that the text can wrap at the right-margin and still maintain the integrity of the indented view. Line numbers can be used as visual markers that indicate where code is word-wrapped and also the exact position of indentation: Screen Real-Estate: Flat Vs Indented Addressing the question of whether the NestView uses up valuable screen real-estate: Contour lines work well with a width the same as the code editor's character width. A NestView width of 12 character widths can therefore accommodate 12 levels of nesting before contour lines are truncated/compressed. If an indented view uses 3 character-widths for each nesting level, then space is saved until nesting reaches 4 levels; after this nesting level, the flat view has a space-saving advantage that increases with each nesting level. Note: a minimum indentation of 4 character widths is often recommended for code; however, XML often manages with less. Also, Virtual Formatting permits less indentation to be used because there's no risk of alignment issues. A comparison of the 2 views is shown below: Based on the above, it's probably fair to conclude that view style choice will be based on factors other than screen real-estate. The one exception is where screen space is at a premium, for example on a Netbook/Tablet or when multiple code windows are open.
In these cases, the resizable NestView would seem to be a clear winner. Use Cases Examples of real-world situations where NestView may be a useful option: Where screen real-estate is at a premium a. On devices such as tablets, notepads and smartphones b. When showing code on websites c. When multiple code windows need to be visible on the desktop simultaneously Where consistent whitespace indentation of text within code is a priority For reviewing deeply nested code. For example where sub-languages (e.g. Linq in C# or XPath in XSLT) might cause high levels of nesting. Accessibility Resizing and color options must be provided to aid those with visual impairments, and also to suit environmental conditions and personal preferences: Compatibility of edited code with other systems A solution incorporating a NestView option should ideally be capable of stripping leading tab and space characters (identified as only having a formatting role) from imported code. Then, once stripped, the code could be rendered neatly in both the left-justified and indented views without change. For many users relying on systems such as merging and diff tools that are not whitespace-aware, this will be a major concern (if not a complete show-stopper). Other Works: Visualisation of Overlapping Markup Published research by Wendell Piez, dating from 2004, addresses the issue of the visualisation of overlapping markup, specifically LMNL. This includes SVG graphics with significant similarities to the NestView proposal; as such, they are acknowledged here. The visual differences are clear in the images (below); the key functional distinction is that NestView is intended only for well-nested XML or code, whereas Wendell Piez's graphics are designed to represent overlapped nesting. The graphics above were reproduced - with kind permission - from http://www.piez.org Sources: Towards Hermeneutic Markup Half-steps toward LMNL
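    As a rough illustration of the heatmap shading described in the overview above - and not something taken from the proposal itself - the C# sketch below walks an XML document with XmlReader and maps each element's nesting depth to a colour that a NestView-style gutter could paint. The colour ramp and the choice of XmlReader are illustrative assumptions only.

        using System;
        using System.Drawing;
        using System.IO;
        using System.Xml;

        static class NestHeatmap
        {
            // Deeper nesting -> warmer colour, clamped at maxDepth (12 matches the
            // 12-character-width NestView discussed above).
            public static Color DepthToColor(int depth, int maxDepth)
            {
                float t = Math.Min(depth, maxDepth) / (float)maxDepth;
                return Color.FromArgb(255, (int)(255 * t), (int)(160 * (1 - t)), (int)(255 * (1 - t)));
            }

            // Prints the depth and colour for every element start tag in the XML text.
            public static void PrintDepths(string xml)
            {
                using (XmlReader reader = XmlReader.Create(new StringReader(xml)))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element)
                        {
                            Console.WriteLine("{0}: depth={1}, colour={2}",
                                reader.Name, reader.Depth, DepthToColor(reader.Depth, 12));
                        }
                    }
                }
            }
        }

    A real editor would of course track depth per text line rather than per element, but the same depth-to-colour mapping applies.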

    Read the article

  • Controlling Crystal Reports Authentication

    - by Jason Ulloa
    For all of us who have worked with Crystal Reports, it is no secret that when we try to connect a report directly to the database, we immediately run into the authentication problem. That is, as soon as the report starts loading it asks us to authenticate against the server, and if we don't, we simply never see the report. Besides being tedious for users, this becomes a fairly serious security problem, which is why in most cases the recommendation is to use a dataset. However, for those who know this and still don't want to use datasets, but would rather connect their Crystal report directly, we will see how to implement a small class that helps with that task. Generally, when we work with a web application, our connection string is kept in the web.config, and in many cases it also contains data such as the user and password needed to access the database. It is this connection string and these values that we will use to implement the authentication for the report. The connection string usually looks like this: <connectionStrings> <remove name="LocalSqlServer"/> <add name="xxx" connectionString="Data Source=.\SqlExpress;Integrated Security=False;Initial Catalog=xxx;user id=myuser;password=mypass" providerName="System.Data.SqlClient"/> </connectionStrings> For our example we will name the class CrystalRules (just a name I thought of on the spot).
1. First step. We create a variable of type SqlConnectionStringBuilder and assign it the connection string defined in the web.config; we will later use it to obtain the user and password for the Crystal report. SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["xxx"].ConnectionString);
2. Implementing the properties. To keep things tidy, we create several private properties that will hold the database name, the password, the user and the server: private string _dbName; private string _serverName; private string _userID; private string _passWord; private string dataBase { get { return _dbName; } set { _dbName = value; } } private string serverName { get { return _serverName; } set { _serverName = value; } } private string userName { get { return _userID; } set { _userID = value; } } private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }
3. Creating the method that applies the connection data. Once we have the properties, we assign them the values collected by the SqlConnectionStringBuilder and create a ConnectionInfo variable to apply the connection data. internal void ApplyInfo(ReportDocument _oRpt) { dataBase = builder.InitialCatalog; serverName = builder.DataSource; userName = builder.UserID; dataBasePassword = builder.Password; Database oCRDb = _oRpt.Database; Tables oCRTables = oCRDb.Tables; //Table oCRTable = default(Table); TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo); ConnectionInfo oCRConnectionInfo = new ConnectionInfo(); oCRConnectionInfo.DatabaseName = _dbName; oCRConnectionInfo.ServerName = _serverName; oCRConnectionInfo.UserID = _userID; oCRConnectionInfo.Password = _passWord; foreach (Table oCRTable in oCRTables) { oCRTableLogonInfo = oCRTable.LogOnInfo; oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo; oCRTable.ApplyLogOnInfo(oCRTableLogonInfo); } }
4. Creating the report document and applying the security. Once the data has been gathered and assigned, we create a ReportDocument, attach it to the CrystalReportViewer and apply the access data obtained earlier. public void loadReport(string repName, CrystalReportViewer viewer) { // attach our report to the viewer and set the database login. ReportDocument report = new ReportDocument(); report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName)); ApplyInfo(report); viewer.ReportSource = report; }
In the end, the complete class looks like this: public class CrystalRules { SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["Fatchoy.Data.Properties.Settings.FatchoyConnectionString"].ConnectionString); private string _dbName; private string _serverName; private string _userID; private string _passWord; private string dataBase { get { return _dbName; } set { _dbName = value; } } private string serverName { get { return _serverName; } set { _serverName = value; } } private string userName { get { return _userID; } set { _userID = value; } } private string dataBasePassword { get { return _passWord; } set { _passWord = value; } } internal void ApplyInfo(ReportDocument _oRpt) { dataBase = builder.InitialCatalog; serverName = builder.DataSource; userName = builder.UserID; dataBasePassword = builder.Password; Database oCRDb = _oRpt.Database; Tables oCRTables = oCRDb.Tables; //Table oCRTable = default(Table); TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo); ConnectionInfo oCRConnectionInfo = new ConnectionInfo(); oCRConnectionInfo.DatabaseName = _dbName; oCRConnectionInfo.ServerName = _serverName; oCRConnectionInfo.UserID = _userID; oCRConnectionInfo.Password = _passWord; foreach (Table oCRTable in oCRTables) { oCRTableLogonInfo = oCRTable.LogOnInfo; oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo; oCRTable.ApplyLogOnInfo(oCRTableLogonInfo); } } public void loadReport(string repName, CrystalReportViewer viewer) { // attach our report to the viewer and set the database login. ReportDocument report = new ReportDocument(); report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName)); ApplyInfo(report); viewer.ReportSource = report; } #region instance private static CrystalRules m_instance; // Properties public static CrystalRules Instance { get { if (m_instance == null) { m_instance = new CrystalRules(); } return m_instance; } } public DataDataContext m_DataContext { get { return DataDataContext.Instance; } } #endregion instance }
Although this solution is not robust and not the most secure, in scenarios such as an intranet, or when we are working against the clock, it can be a great help.
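    As a usage sketch (not part of the original article): from an ASP.NET WebForms code-behind, the singleton exposed through CrystalRules.Instance can load and bind a report in one call. The page class name, the "Sales.rpt" file name and the crystalReportViewer control name below are placeholder assumptions; the viewer control itself would be declared on the .aspx page.

        using System;
        using CrystalDecisions.Web; // namespace of the CrystalReportViewer web control

        public partial class SalesReportPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    // loadReport resolves ~/Reports/Sales.rpt, applies the credentials
                    // taken from web.config and binds the report to the viewer.
                    CrystalRules.Instance.loadReport("Sales.rpt", crystalReportViewer);
                }
            }
        }

    Note that because CrystalRules reads its connection string in its field initializer, the connection string name in web.config must match the one used inside the class.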

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.
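    As a side note rather than an answer: the same scale/bias can be folded into the matrix on the CPU, so the pixel shader only needs a single mul() and a divide by w. Below is a minimal XNA-style sketch (row-vector convention, matching the mul(position, LightViewProjection) used above); the variable and effect parameter names are placeholders. With a Direct3D depth range of [0, 1] the z row is left untouched, which is one difference from the OpenGL-oriented matrix quoted above.

        using Microsoft.Xna.Framework;

        static class ShadowMatrices
        {
            // Appends a clip-space -> texture-space transform to the light matrices:
            // x,y in [-w, w] become u,v in [0, w] (i.e. [0, 1] after the divide by w),
            // with v flipped because texture coordinates grow downward.
            public static Matrix BuildLightViewProjTexture(Matrix lightView, Matrix lightProjection)
            {
                Matrix clipToTexture = new Matrix(
                    0.5f,  0.0f, 0.0f, 0.0f,
                    0.0f, -0.5f, 0.0f, 0.0f,
                    0.0f,  0.0f, 1.0f, 0.0f,
                    0.5f,  0.5f, 0.0f, 1.0f);

                return lightView * lightProjection * clipToTexture;
            }
        }

        // Hypothetical usage when setting up the deferred light pass:
        // effect.Parameters["LightViewProjTexture"].SetValue(
        //     ShadowMatrices.BuildLightViewProjTexture(lightView, lightProjection));

    In the shader, surf_tex.xy / surf_tex.w would then already be in texture space, leaving only the half-texel offset to add if required.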

    Read the article

  • CodePlex Daily Summary for Thursday, June 17, 2010

    CodePlex Daily Summary for Thursday, June 17, 2010New ProjectsAstalanumerator: A JavaScript based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of a object.BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.CastleInvestProj: Castle Investigating project Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...Easy Wings: Small webApp to manage aircraft booking in flying club. French only for the moment.EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...guidebook: a project to plan your road trip.Look into documents for e-discovery: Search, browse, tag, annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals do e-discovery. One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...Project Axure: More details coming soon.Чат кутежа 2.0: ИРЦ чат специально для форума ЕНЕ简易代码生成器: 初次使用CodePlex,这只是一个测试项目。打算用WPF做一个简单的代码生成器,兼具SQL Server Client功能。使用.Net 4.0, C#开发。运营工作系统: TRAS(Team resource assist system) is a toolkit that help the studio to manage and distribute the daily work, like publish the news, GM broadcast a...New ReleasesAmuse - A New MU* Client For Windows: 2010 June: Important Notice to TestersPlease uninstall any previous versions of Amuse prior to this one before installing. Changes and InformationFirst relea...ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0Binary This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in javascript and inspect them regardless if they were objects or not. IE doesn’t support for(i in..) for native pro...BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...FlickrNet API Library: 3.0 Beta 3: Another small Beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. 
This affects searches where you are ...Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1:[change] Test run failure notification now disappears automaticallyOpen Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple html file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide Part 1 of the Guide is released here. Highlights of this release are:...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. * Improved compatibility with P...RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show link to BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...secs4net: Relase 1.01: version 1.01 releasesELedit: sELedit v1.1c: Added: Tool for exporting NPC/Mob database file that is used by sNPCeditSharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: Open tool pane link to default Web Part text Made all images except the first hidden by default, so the Web Part will degrade gracefully w...sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd party magnifier softwaresNPCedit: sNPCedit v0.9c: Added: npc/mob names and corresponding datbaseSolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010: .sTASKedit: sTASKedit v0.8: Important BugFix: there was an mistake in the structure, team-member block and get-items block was swapped internally. Tasks that contains both blo...stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. see the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): RadDocking component has been replaced with the latest RadDock control Requirements: Visual Studio 2005+ Tele...TFS Buddy: TFS Buddy 1.2: Fixes a problem with notificationsThales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. This is buggy but works.VCC: Latest build, v2.1.30616.0: Automatic drop of latest buildXsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff!! 
Full screen editing Some bugs are still in EditArea component and object lists in code completi...Чат кутежа 2.0: 0.9a build 2 версия: вторая сборка первой альфа-версии ирц-клиента.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsdotSpatialpatterns & practices: Enterprise Library Contribpatterns & practices – Enterprise LibraryBlogEngine.NETLightweight Fluent WorkflowRhyduino - Arduino and Managed CodeSunlit World SchemeNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSolidWorks Addin DevelopmentN2 CMS

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer following simple questions What is Elasticity? What is Scalability? How ACID properties vary from NOSQL Concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB  an innovative and welcome change in database paradigm? Elasticity This word’s original form is used in many different ways and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (database, application servers), it has come to mean a few things - allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or bunch of processes combined as modules). When it is about increasing resources the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which comes to mind. 1) Can we add another node to our software system? 2) If yes, does adding new node cause downtime for the system? Let us assume we have added new node, let us see what the new needs of the system are when a new node is added. Balancing incoming requests to multiple nodes Synchronization of a shared state across multiple nodes Identification of “downstate” and resolution action to bring it to “upstate” Well, adding a new node has its advantages as well. Here are few of the positive points Throughput can increase nearly horizontally across the node throughout the system Response times of application will increase as in-between layer interactions will be improved Now, Let us put the above concepts in the perspective of a Database. When we mention the term “running out of resources” or “application is bound to resources” the resources can be CPU, Memory or Bandwidth. The regular approach to “gain scalability” in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough as there needs to be an efficient way of managing such large workload on a “single machine” across memory and CPU bound (right kind of scheduling)  workload. We next move on to either read/write separation of the workload or functionality-based sharing so that we still have control of the individual. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read and for reporting applications to “aggregate the data” in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload. Scalability In the context of database/applications, scalability means three main things Ability to handle normal loads without pressure E.g. 
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to expected peak load which is greater than normal load with acceptable response times Ability to provide acceptable response times across the system E.g. Response time in S milliseconds (or agreed upon unit of measure) – 90% of the time The Issue – Need of Scale In normal cases one can plan for the load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from Hardware and Software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve and most of the operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when schema changes or workload pattern changes. As the requirement grows one still needs to deal with scale need in manual ways by providing an abstraction in the middle tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide flexible schema. The issue with the stores is that they are compromising on mostly consistency (no ACID guarantees) and one has to use NON-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantee and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across On-premise vs Cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is shared deployment) of resources and ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees and require developers to learn new paradigms A system which does not force fit a new way interacting with database by learning Non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
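    As an aside on the read-and-write-separation option described earlier, the routing itself often amounts to nothing more than choosing a connection string per operation type. The sketch below is a minimal illustration of that idea only; the connection string names are placeholder assumptions, and a real implementation must also account for replication lag when reading from a slave.

        using System.Configuration;
        using System.Data.SqlClient;

        static class ConnectionRouter
        {
            // Writes always go to the primary server.
            public static SqlConnection OpenForWrite()
            {
                var connection = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["Primary"].ConnectionString);
                connection.Open();
                return connection;
            }

            // Reads can be served by a replica that receives changes with some delay.
            public static SqlConnection OpenForRead()
            {
                var connection = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["ReadReplica"].ConnectionString);
                connection.Open();
                return connection;
            }
        }

    The article's point still stands: pushing this decision into application code is exactly the kind of manual scale management that an elastically scalable database aims to remove.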

    Read the article

  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
    Suppose you want a system that lets you plug custom validation rules on your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about plugable validation in Entity Framework Code First, but this is a different approach. An example API is in order, first, a ruleset, which will hold the collection of rules: 1: public interface IRuleset : IDisposable 2: { 3: void AddRule<T>(IRule<T> rule); 4: IEnumerable<IRule<T>> GetRules<T>(); 5: } Next, a rule: 1: public interface IRule<T> 2: { 3: Boolean CanSave(T entity, DbContext ctx); 4: Boolean CanUpdate(T entity, DbContext ctx); 5: Boolean CanDelete(T entity, DbContext ctx); 6: String Name 7: { 8: get; 9: } 10: } Let’s analyze what we have, starting with the ruleset: Only has methods for adding a rule, specific to an entity type, and to list all rules of this entity type; By implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied. A rule, on the other hand: Has discrete methods for checking if a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied; Has a name property for helping us identifying what failed. A ruleset really doesn’t need a public implementation, all we need is its interface. The private (internal) implementation might look like this: 1: sealed class Ruleset : IRuleset 2: { 3: private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>(); 4: private ObjectContext octx = null; 5:  6: internal Ruleset(ObjectContext octx) 7: { 8: this.octx = octx; 9: } 10:  11: public void AddRule<T>(IRule<T> rule) 12: { 13: if (this.rules.ContainsKey(typeof(T)) == false) 14: { 15: this.rules[typeof(T)] = new HashSet<Object>(); 16: } 17:  18: this.rules[typeof(T)].Add(rule); 19: } 20:  21: public IEnumerable<IRule<T>> GetRules<T>() 22: { 23: if (this.rules.ContainsKey(typeof(T)) == true) 24: { 25: foreach (IRule<T> rule in this.rules[typeof(T)]) 26: { 27: yield return (rule); 28: } 29: } 30: } 31:  32: public void Dispose() 33: { 34: this.octx.SavingChanges -= RulesExtensions.OnSaving; 35: RulesExtensions.rulesets.Remove(this.octx); 36: this.octx = null; 37:  38: this.rules.Clear(); 39: } 40: } Basically, this implementation: Stores the ObjectContext of the DbContext to which it was created for, this is so that later we can remove the association; Has a collection - a set, actually, which does not allow duplication - of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared); Has generic methods for adding and enumerating rules of a given type; Has a Dispose method for cancelling the enforcement of the rules. 
A (really dumb) rule applied to Product might look like this: 1: class ProductRule : IRule<Product> 2: { 3: #region IRule<Product> Members 4:  5: public String Name 6: { 7: get 8: { 9: return ("Rule 1"); 10: } 11: } 12:  13: public Boolean CanSave(Product entity, DbContext ctx) 14: { 15: return (entity.Price > 10000); 16: } 17:  18: public Boolean CanUpdate(Product entity, DbContext ctx) 19: { 20: return (true); 21: } 22:  23: public Boolean CanDelete(Product entity, DbContext ctx) 24: { 25: return (true); 26: } 27:  28: #endregion 29: } The DbContext is there because we may need to check something else in the database before deciding whether to allow an operation or not. And here’s how to apply this mechanism to any DbContext, without requiring the usage of a subclass, by means of an extension method: 1: public static class RulesExtensions 2: { 3: private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules"); 4: internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>(); 5:  6: private static Type GetRealType(Object entity) 7: { 8: return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType()); 9: } 10:  11: internal static void OnSaving(Object sender, EventArgs e) 12: { 13: ObjectContext octx = sender as ObjectContext; 14: IRuleset ruleset = rulesets[octx].Item1; 15: DbContext ctx = rulesets[octx].Item2; 16:  17: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added)) 18: { 19: Object entity = entry.Entity; 20: Type realType = GetRealType(entity); 21:  22: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 23: { 24: if (rule.CanSave(entity, ctx) == false) 25: { 26: throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name))); 27: } 28: } 29: } 30:  31: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted)) 32: { 33: Object entity = entry.Entity; 34: Type realType = GetRealType(entity); 35:  36: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 37: { 38: if (rule.CanDelete(entity, ctx) == false) 39: { 40: throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name))); 41: } 42: } 43: } 44:  45: foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified)) 46: { 47: Object entity = entry.Entity; 48: Type realType = GetRealType(entity); 49:  50: foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable)) 51: { 52: if (rule.CanUpdate(entity, ctx) == false) 53: { 54: throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name))); 55: } 56: } 57: } 58: } 59:  60: public static IRuleset CreateRuleset(this DbContext context) 61: { 62: Tuple<IRuleset, DbContext> ruleset = null; 63: ObjectContext octx = (context as IObjectContextAdapter).ObjectContext; 64:  65: if (rulesets.TryGetValue(octx, out ruleset) == false) 66: { 67: ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context); 68: 69: octx.SavingChanges += OnSaving; 70: } 71:  72: return (ruleset.Item1); 73: } 74: } It relies on the SavingChanges event of the ObjectContext to intercept the saving operations before they are actually issued. 
    Yes, it uses a bit of dynamic magic! Very handy, by the way! So, let's put it all together:

using (MyContext ctx = new MyContext())
{
    IRuleset rules = ctx.CreateRuleset();
    rules.AddRule(new ProductRule());

    ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 });

    ctx.SaveChanges(); //an exception is fired here

    //when we no longer need to apply the rules
    rules.Dispose();
}

    Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.
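    On that last point, here is a minimal, hedged sketch of what the plain Entity Framework variant could look like - it is not from the original post. It assumes the Ruleset class, the rulesets dictionary and the OnSaving handler above are reused from the same assembly, and that rules are written to tolerate a null DbContext (or are changed to take an ObjectContext instead):

public static class ObjectContextRulesExtensions
{
    //plain EF exposes SavingChanges directly on ObjectContext, so no IObjectContextAdapter hop is needed
    public static IRuleset CreateRuleset(this ObjectContext octx)
    {
        Tuple<IRuleset, DbContext> ruleset = null;

        if (RulesExtensions.rulesets.TryGetValue(octx, out ruleset) == false)
        {
            //there is no DbContext in plain EF, so null is stored in its place
            ruleset = RulesExtensions.rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), null);

            octx.SavingChanges += RulesExtensions.OnSaving;
        }

        return (ruleset.Item1);
    }
}

    The only structural change is the registration point; the rule-checking logic in OnSaving stays the same.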

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.  The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it).  While it's still important to implement a proper design for simple applications, it's absolutely critical to do so for the complex applications.  Why?  It's all about time and money.  If you are unable to manage your complex applications in an efficient manner, the cost of managing them will increase dramatically, as will the time to get things done (time to market).  On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are. This article is intended to discuss a number of key areas to think about when designing complex applications on ATG.  Some of this can get fairly technical, so it may help to get some background first.  You can get enough of the required background information from this post.  After reading that, come back here and follow along. Application Design Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries.  To get started, let's assume that we are talking about an application like that - one that has these properties:
    - Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely coupled - a single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality - relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)
    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work... Code / configuration - assemble into modules When it comes to defining your modules for a complex application, there are a number of goals:
    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality 'further down in the stack of modules'
    - Provide a good balance between shared resources and autonomy for countries / sites
    Now I'll describe a high-level approach to how you could accomplish those goals.  Let's start from the bottom and work our way up.  At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff.  You want to make sure that you are leveraging all the modules that make sense in order to get as much value from ATG as possible - and have less to write yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:
    - Sits directly on top of the ATG modules
    - Used by all applications across all countries and sites - this is the foundation for everyone
    - Contains everything that is common across all countries / all sites
    - Once established and settled, will change less frequently than other 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Provides a means of code sharing, and therefore less development / testing - faster time to market
    - Contains a 'reference' web application (described below)
    The next layer up could be multiple modules, one for each country (you could replace this with region if that makes more sense).  We'll define those modules as follows:
    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones
    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site, as follows:
    - Sits on top of the module for the country it resides in
    - Contains what is unique to a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications
    The graphic below should help to indicate how these modules fit together (a manifest sketch at the end of this article shows one way to express this layering). Web applications As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules.  Web applications are also contained within ATG modules; however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging.  One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site.  Here's a description of the 'reference' web application:
    - Contains minimal / sample reference styling, as this will mostly be addressed at the site-level web app
    - Focuses on functionality - ensures that core functionality is revealed via this web application
    - Each individual site can use this as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc.)
    There are some techniques to share web application assets - i.e. multiple web applications defined in the web.xml - and it's worth investigating, but that is out of scope here. Reference infrastructure In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites.  It's more likely that different countries (or regions) could have their own infrastructure solution.  In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment.  Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume.  Having some consistency in terms of infrastructure will save time and money as new countries / sites come online.  Here are some properties of the reference infrastructure:
    - Standardized approach to the setup of hardware
    - Type and number of servers
    - Defines the application server, operating system, database, etc.,
      including vendor and specific versions
    - Consistent naming conventions
    - Provides a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers (production, staging, BCC / preview)
    - Each site can change as required to meet scale requirements
    Governance / organization It should be no surprise that the complex application we're talking about is backed by an equally complex organization.  One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization.  Here are some ideas and goals to work towards:
    - Establish a committee to make enterprise-wide decisions that affect all sites.  Representation should be evenly distributed, there should be a clear communication procedure, and the focus should stay on high-level business goals
    - Evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap; determine when to upgrade and ensure the value will be realized
    - Determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and what the procedure is for controlling what goes into the corporate foundation module
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc.  Only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions
    - Create an innovation team to quickly develop new features and perform proofs of concept; all teams can benefit from their findings
    Summary At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application.  To summarize, it's all about competitive advantage...  You will need to reduce costs and improve time to market, with the goal of providing a better experience for your end customers.  You can reduce cost by reducing development time, reducing the time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and optimizing operations.  With an efficient design, you can improve your time to market and your business will be more flexible and agile.  Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity), and this will be rewarded - you're now a leader. In addition to the above, you'll realize soft benefits as well.  Your staff will be operating in a culture based on sharing.  You'll want to reward efforts to improve and enhance the foundation, as this will benefit everyone.  This culture will inspire innovation, which can only lend itself to your competitive advantage.
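    To make the module layering described earlier a bit more concrete, here is a hedged sketch of how the dependencies could be expressed in each module's META-INF/MANIFEST.MF. The module names (CorporateFoundation, Country.DE, Site.DE.Main) are hypothetical, and the exact set of out-of-the-box ATG modules you depend on (DafEar.Admin, DCS, DPS, DSS, etc.) will vary with your installation and license:

    CorporateFoundation/META-INF/MANIFEST.MF (sits directly on top of the ATG modules):
        ATG-Required: DafEar.Admin DPS DSS DCS
        ATG-Config-Path: config/

    Country.DE/META-INF/MANIFEST.MF (everything common to all sites in this country):
        ATG-Required: CorporateFoundation
        ATG-Config-Path: config/

    Site.DE.Main/META-INF/MANIFEST.MF (thin - mostly configuration plus the site's web application):
        ATG-Required: Country.DE
        ATG-Config-Path: config/

    Assembling a site's application against its bottom module then pulls in the country layer, the corporate foundation, and the ATG modules beneath it, which is exactly the layering the diagram is meant to convey.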

    Read the article

  • Using SSIS to send an HTML E-Mail Message with built-in table of Counts.

    - by Kevin Shyr
    For the record, this can be just as easily done with a .NET class and a DLL call (see the sketch at the end of this post).  The two major reasons for this ending up as an SSIS package are: There are a lot of SQL resources for maintenance, but not as many .NET developers. There is an existing automated process that links up SQL Jobs (more on that in the next post), and this is part of that process.   To start, this is what the SSIS package looks like: The first part of the control flow is just for the override scenario.   In the Execute SQL Task, it calls a stored procedure, which already formats the result into XML by using "FOR XML PATH('Row'), ROOT(N'FieldingCounts')".  The resulting XML string looks like this:

<FieldingCounts>
  <Row>
    <CellId>M COD</CellId>
    <Mailed>64</Mailed>
    <ReMailed>210</ReMailed>
    <TotalMail>274</TotalMail>
    <EMailed>233</EMailed>
    <TotalSent>297</TotalSent>
  </Row>
  <Row>
    <CellId>M National</CellId>
    <Mailed>11</Mailed>
    <ReMailed>59</ReMailed>
    <TotalMail>70</TotalMail>
    <EMailed>90</EMailed>
    <TotalSent>101</TotalSent>
  </Row>
  <Row>
    <CellId>U COD</CellId>
    <Mailed>91</Mailed>
    <ReMailed>238</ReMailed>
    <TotalMail>329</TotalMail>
    <EMailed>291</EMailed>
    <TotalSent>382</TotalSent>
  </Row>
  <Row>
    <CellId>U National</CellId>
    <Mailed>63</Mailed>
    <ReMailed>286</ReMailed>
    <TotalMail>349</TotalMail>
    <EMailed>374</EMailed>
    <TotalSent>437</TotalSent>
  </Row>
</FieldingCounts>

    This result is saved into an internal SSIS variable, configured on the General tab and the Result Set tab of the task.   Now comes the trickier part.  We need to use the XML Task to format the XML string result into an HTML table, and I used Direct input for the XSLT.  Here is the XSLT code:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <xsl:template match="/ROOT">
    <table border="1" cellpadding="6">
      <tr>
        <td></td>
        <td>Mailed</td>
        <td>Re-mailed</td>
        <td>Total Mail (Mailed, Re-mailed)</td>
        <td>E-mailed</td>
        <td>Total Sent (Mailed, E-mailed)</td>
      </tr>
      <xsl:for-each select="FieldingCounts/Row">
        <tr>
          <xsl:for-each select="./*">
            <td>
              <xsl:value-of select="."
/>
            </td>
          </xsl:for-each>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>

    Then a Script Task is used to send out an HTML email (as we are all painfully aware that the SSIS Send Mail Task only sends plain text):

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.Net.Mail;
using System.Net;

namespace ST_b829a2615e714bcfb55db0ce97be3901.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            String EmailMsgBody = String.Format("<HTML><BODY><P>{0}</P><P>{1}</P></BODY></HTML>"
                                                , Dts.Variables["Config_SMTP_MessageSourceText"].Value.ToString()
                                                , Dts.Variables["InternalStr_CountResultAfterXSLT"].Value.ToString());
            MailMessage EmailCountMsg = new MailMessage(Dts.Variables["Config_SMTP_From"].Value.ToString().Replace(";", ",")
                                                        , Dts.Variables["Config_SMTP_Success_To"].Value.ToString().Replace(";", ",")
                                                        , Dts.Variables["Config_SMTP_SubjectLinePrefix"].Value.ToString() + " " + Dts.Variables["InternalStr_FieldingDate"].Value.ToString()
                                                        , EmailMsgBody);
            //EmailCountMsg.From.
            EmailCountMsg.CC.Add(Dts.Variables["Config_SMTP_Success_CC"].Value.ToString().Replace(";", ","));
            EmailCountMsg.IsBodyHtml = true;

            SmtpClient SMTPForCount = new SmtpClient(Dts.Variables["Config_SMTP_ServerAddress"].Value.ToString());
            SMTPForCount.Credentials = CredentialCache.DefaultNetworkCredentials;

            SMTPForCount.Send(EmailCountMsg);

            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
}

    Note on this code: notice that the email lists have Replace(";", ",").  This is only here because the lists are configurable in the SQL Job Step at Set Values, which does not react well with commas as the email separator, while System.Net.Mail only accepts commas as the email separator - hence the extra Replace on each string. The result is a nicely formatted email message with the count information.
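    As mentioned at the top, the same result can also be produced from a plain .NET class instead of an SSIS package. The following is only a rough sketch of that alternative, not part of the original package: it assumes the stored procedure's XML output has been captured into a file, that the same stylesheet is saved alongside it, and that the stylesheet's root template matches the XML actually fed to it (the file names, addresses and server name below are placeholders):

using System;
using System.IO;
using System.Net.Mail;
using System.Xml;
using System.Xml.Xsl;

class FieldingCountMailer
{
    static void Main()
    {
        //XML produced by the stored procedure (FOR XML PATH('Row'), ROOT(N'FieldingCounts'))
        string countsXml = File.ReadAllText("FieldingCounts.xml");

        //apply the same XSLT used by the SSIS XML Task to build the HTML table
        var xslt = new XslCompiledTransform();
        xslt.Load("CountsToHtmlTable.xslt");

        string htmlTable;
        using (var reader = XmlReader.Create(new StringReader(countsXml)))
        using (var writer = new StringWriter())
        {
            xslt.Transform(reader, null, writer);
            htmlTable = writer.ToString();
        }

        //send it as an HTML body, just like the Script Task does
        var message = new MailMessage("counts@example.com", "team@example.com")
        {
            Subject = "Fielding counts",
            Body = "<HTML><BODY>" + htmlTable + "</BODY></HTML>",
            IsBodyHtml = true
        };

        var smtp = new SmtpClient("smtp.example.com");
        smtp.Send(message);
    }
}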

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing Http APIs for cloud services, AWS also provides a set of platform SDKs that hide many of the complexities present in the APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro Applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous, and they use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept).  In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while they were in transit. For doing that, it uses a signature or hash of the message content and some of the headers, computed with a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working for the first time in Metro will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity involved in generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found while trying to consume this Amazon service. 1. Generating the signature for the Authorization header All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the use of static methods in the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
{
    var stringToSign = string.Format("{0}\n" +
                                     "\n" +
                                     "\n" +
                                     "\n" +
                                     "x-amz-date:{1}\n" +
                                     "/{2}/",
                                     httpMethod,
                                     timestamp,
                                     resource);

    var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
    var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
    var hmacKey = algorithm.CreateKey(keyMaterial);

    var signature = CryptographicEngine.Sign(
        hmacKey,
        CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
        );

    return CryptographicBuffer.EncodeToBase64String(signature);
}

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket.
    An HMAC-SHA1 hash is computed using a secret or symmetric key provided by AWS in the management console. 2. Sending an Http Request to the S3 service WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in the latter. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is something absolutely required for consuming S3. Also, HttpClient is friendlier for unit testing, as it can receive an HttpMessageHandler in its constructor, which can be faked to emulate a real http call. This is what the code for consuming the service with HttpClient looks like:

public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
{
    var timestamp = string.Format("{0:r}", DateTime.UtcNow);
    var auth = DeriveAuthToken(name, "PUT", timestamp);

    var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
    request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
    request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
    request.Headers.Add("x-amz-date", timestamp);

    var client = new HttpClient();
    var response = await client.SendAsync(request);

    return new S3Response
    {
        Succeed = response.StatusCode == HttpStatusCode.OK,
        Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
    };
}

    You will notice a few additional things in this code. By default, HttpClient validates the values of some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to consume the asynchronous calls in a sequential style. If you want to unit test this code and fake the call to the real S3 service, you will have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates this concept:

public class FakeHttpMessageHandler : HttpMessageHandler
{
    HttpResponseMessage response;

    public FakeHttpMessageHandler(HttpResponseMessage response)
    {
        this.response = response;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<HttpResponseMessage>();

        tcs.SetResult(response);

        return tcs.Task;
    }
}

    You can use this handler for injecting any response while you are unit testing the code.
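    To round this off, here is a hedged sketch of how the fake handler might be wired up in a test. The S3Client type and its constructor are hypothetical stand-ins for whatever class hosts CreateBucket, assumed to have been refactored to accept the HttpClient it should use (the System.Net, System.Net.Http, System.Diagnostics and System.Threading.Tasks namespaces are assumed):

public async Task CreateBucket_Succeeds_WhenServiceReturns200()
{
    //the fake handler short-circuits the http pipeline and returns this canned response
    var fakeResponse = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(string.Empty)
    };
    var client = new HttpClient(new FakeHttpMessageHandler(fakeResponse));

    //hypothetical class hosting CreateBucket, refactored to take the HttpClient as a dependency
    var s3 = new S3Client("access-key", "secret-key", client);

    S3Response result = await s3.CreateBucket("my-test-bucket");

    Debug.Assert(result.Succeed); //no real call to s3.amazonaws.com was made
}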

    Read the article


< Previous Page | 57 58 59 60 61 62 63 64 65 66 67  | Next Page >